Fourier analysis lets you decompose complex signals into sums of simple sinusoids, then work with those frequency components individually. In control theory, this is essential because it connects the time domain (where you observe signals) to the frequency domain (where you design controllers and analyze system behavior). This topic covers Fourier series, Fourier transforms, their discrete counterparts, and the related Laplace and Z-transforms that tie everything together.
Fourier series representation
A Fourier series represents a periodic function as an infinite sum of sinusoids at different frequencies and amplitudes. This is the starting point for all of Fourier analysis: if you can break a repeating signal into sine and cosine components, you can study each component separately.
Periodic functions
A function $f(t)$ is periodic if there exists a positive constant $T$ such that $f(t + T) = f(t)$ for all $t$. The smallest such $T$ is called the fundamental period.
Common examples include sine waves (smooth oscillation), square waves (abrupt transitions between two levels), and sawtooth waves (linear ramp that resets periodically). Each of these looks very different in the time domain, but all can be expressed as sums of sinusoids.
Trigonometric series
The general form of a Fourier series is:

$$f(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(n\omega_0 t) + b_n \sin(n\omega_0 t) \right]$$

- $a_0$ is the DC component (the average value of the function over one period)
- $a_n$ and $b_n$ are the Fourier coefficients that set the amplitude of each harmonic
- $\omega_0 = 2\pi / T$ is the fundamental frequency in radians per second

Each term is called the $n$th harmonic. The $n = 1$ term oscillates at the fundamental frequency, the $n = 2$ term at twice that frequency, and so on.
Fourier coefficients
To find the coefficients, you integrate the signal multiplied by the corresponding sinusoid over one full period:

$$a_0 = \frac{1}{T} \int_{0}^{T} f(t)\, dt, \qquad a_n = \frac{2}{T} \int_{0}^{T} f(t) \cos(n\omega_0 t)\, dt, \qquad b_n = \frac{2}{T} \int_{0}^{T} f(t) \sin(n\omega_0 t)\, dt$$

The intuition: each integral "picks out" how much of that particular frequency is present in $f(t)$. If the signal has no energy at frequency $n\omega_0$, the corresponding coefficient will be zero.
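As a quick numerical sketch of these integrals (the signal and sample count are my own choices, not from the text above): for an odd ±1 square wave, the analysis integrals give $a_n = 0$ for all $n$ and $b_n = 4/(n\pi)$ for odd $n$, which we can check with NumPy by approximating each integral as a mean over one period.

```python
import numpy as np

# Estimate Fourier coefficients of a +/-1 square wave with period T = 2*pi.
# On a uniform grid over exactly one period, (2/T) * integral == 2 * mean.
T = 2 * np.pi
w0 = 2 * np.pi / T                    # fundamental frequency, rad/s
t = np.linspace(0.0, T, 100_000, endpoint=False)
f = np.sign(np.sin(w0 * t))           # square wave: +1 then -1 each period

def a(n):
    # cosine coefficient a_n = (2/T) * integral of f(t) cos(n w0 t) dt
    return 2.0 * np.mean(f * np.cos(n * w0 * t))

def b(n):
    # sine coefficient b_n = (2/T) * integral of f(t) sin(n w0 t) dt
    return 2.0 * np.mean(f * np.sin(n * w0 * t))

print(b(1))   # close to 4/pi ~ 1.2732
print(b(2))   # close to 0 (even harmonics vanish)
print(a(1))   # close to 0 (odd symmetry kills all cosine terms)
```

Note how the orthogonality of the sinusoids does the work: each integral responds only to the matching harmonic.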
Convergence of Fourier series
The Fourier series converges to the original function under the Dirichlet conditions: the function must have a finite number of discontinuities, a finite number of extrema, and be absolutely integrable over one period. Most signals you'll encounter in control theory satisfy these conditions.
Convergence can be pointwise, uniform, or in the mean-square sense. At points of discontinuity, the series converges to the midpoint of the left and right limits.
The Gibbs phenomenon is worth knowing: when you truncate a Fourier series to approximate a discontinuous function, you'll see overshoots of about 9% near the discontinuities. Adding more terms makes the overshoot narrower but doesn't eliminate it.
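The Gibbs overshoot is easy to see numerically. The sketch below (square wave and harmonic counts are my own illustrative choices) sums the first $N$ odd harmonics of a ±1 square wave; the peak near the jump stays at about $1.179$, i.e. roughly 9% of the jump height of 2 above the target level, no matter how large $N$ gets.

```python
import numpy as np

# Partial Fourier sums of a +/-1 square wave: the overshoot near the
# discontinuity narrows as N grows but its height does not shrink.
t = np.linspace(0.0, np.pi, 50_000)   # half-period where the wave is +1

def partial_sum(N):
    # Sum of odd harmonics up to N: (4/pi) * sum_{n odd} sin(n t)/n
    s = np.zeros_like(t)
    for n in range(1, N + 1, 2):
        s += (4.0 / (np.pi * n)) * np.sin(n * t)
    return s

for N in (9, 99, 999):
    print(N, partial_sum(N).max())   # peak tends toward ~1.179
```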
Fourier transforms
Fourier series handle periodic signals, but real-world signals are often non-periodic. The Fourier transform generalizes the Fourier series to handle signals that don't repeat, giving you a continuous frequency spectrum instead of discrete harmonics.
Fourier transform definition
The Fourier transform of a continuous-time signal $x(t)$ is:

$$X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt$$

The inverse Fourier transform recovers the time-domain signal:

$$x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega)\, e^{j\omega t}\, d\omega$$

$X(\omega)$ is generally complex-valued. Its magnitude $|X(\omega)|$ tells you how much of each frequency is present, and its phase $\angle X(\omega)$ tells you the timing offset of each frequency component.
Fourier transform properties
Several properties make Fourier transforms practical to work with. Rather than computing integrals from scratch every time, you can use these properties to build up transforms of complex signals from simpler ones.
Linearity and scaling
- Linearity: If $x_1(t) \leftrightarrow X_1(\omega)$ and $x_2(t) \leftrightarrow X_2(\omega)$, then $a\, x_1(t) + b\, x_2(t) \leftrightarrow a\, X_1(\omega) + b\, X_2(\omega)$
- Scaling: If $x(t) \leftrightarrow X(\omega)$, then $x(at) \leftrightarrow \frac{1}{|a|} X\!\left(\frac{\omega}{a}\right)$

The scaling property has a useful physical interpretation: compressing a signal in time ($|a| > 1$) spreads its spectrum out in frequency, and vice versa. A short pulse has a wide bandwidth; a long pulse has a narrow bandwidth.
Time and frequency shifting
- Time shifting: $x(t - t_0) \leftrightarrow e^{-j\omega t_0} X(\omega)$
- Frequency shifting: $e^{j\omega_0 t} x(t) \leftrightarrow X(\omega - \omega_0)$
A delay in time doesn't change the magnitude spectrum; it only adds a linear phase shift. Frequency shifting (multiplying by a complex exponential) slides the entire spectrum along the frequency axis, which is the basis of modulation in communications.
Convolution and modulation
- Convolution: $x(t) * h(t) \leftrightarrow X(\omega)\, H(\omega)$
- Modulation: $x(t)\, h(t) \leftrightarrow \frac{1}{2\pi}\, X(\omega) * H(\omega)$
The convolution property is arguably the most important for control theory. It means that passing a signal through a linear time-invariant (LTI) system, which involves convolution with the impulse response in the time domain, becomes simple multiplication in the frequency domain.
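Here is a minimal numerical check of the convolution property on sampled signals (the input and the 3-tap impulse response are my own illustrative choices): time-domain convolution with `np.convolve` matches pointwise multiplication of FFT spectra, provided both sequences are transformed at the full output length.

```python
import numpy as np

# Convolution theorem in discrete form: conv in time == multiply in frequency.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # input signal
h = np.array([0.25, 0.5, 0.25])      # simple low-pass FIR impulse response

y_time = np.convolve(x, h)           # time-domain convolution

L = len(x) + len(h) - 1              # length of the linear convolution
y_freq = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real

print(np.allclose(y_time, y_freq))   # True
```

For an LTI system, `h` plays the role of the impulse response, so this is exactly "filtering becomes multiplication by the frequency response."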
Discrete Fourier transforms (DFT)
When you work with digital systems, signals are sampled and stored as finite sequences. The DFT is the tool for frequency analysis of these discrete, finite-length signals.

DFT definition and properties
For a discrete-time signal $x[n]$ of length $N$:

$$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1$$

The inverse DFT recovers the time-domain sequence:

$$x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, e^{j 2\pi k n / N}, \qquad n = 0, 1, \ldots, N-1$$
The DFT shares many properties with the continuous Fourier transform (linearity, shifting, convolution), but the shifting and convolution are circular because the DFT implicitly treats the sequence as periodic with period $N$.
Fast Fourier transform (FFT) algorithms
Computing the DFT directly requires $O(N^2)$ operations. The FFT is an algorithm that computes the same result in $O(N \log N)$ operations by exploiting symmetry and periodicity in the complex exponentials.
The most common variant is the Cooley-Tukey radix-2 algorithm, which recursively splits an $N$-point DFT into two $N/2$-point DFTs (requiring $N$ to be a power of 2). For a 1024-point signal, this reduces the operation count from about 1,000,000 to roughly 10,000. The FFT is what makes real-time spectral analysis practical.
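To make the equivalence concrete, a direct $O(N^2)$ DFT (written here as a matrix-vector product, a deliberately naive sketch) can be compared against NumPy's FFT; they produce the same spectrum.

```python
import numpy as np

# Naive O(N^2) DFT via the full DFT matrix, versus numpy's FFT.
def dft_direct(x):
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # DFT matrix W[k, n]
    return W @ x

x = np.random.default_rng(1).standard_normal(256)
print(np.allclose(dft_direct(x), np.fft.fft(x)))  # True
```

For $N = 256$ the matrix product does 65,536 complex multiplies where the FFT needs on the order of $256 \log_2 256 = 2048$ operations.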
Circular convolution
Circular convolution is the DFT's version of convolution:

$$y[n] = \sum_{m=0}^{N-1} x[m]\, h[(n - m) \bmod N]$$

In the DFT domain, this becomes pointwise multiplication: $Y[k] = X[k]\, H[k]$.
If you actually want linear convolution (the standard kind), you need to zero-pad both sequences to length at least $N_1 + N_2 - 1$ (the length of the linear convolution of a length-$N_1$ and a length-$N_2$ sequence) before taking the DFT. Otherwise the circular wrap-around will corrupt your result.
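A small demonstration of the wrap-around and the zero-padding fix (the two length-4 sequences are my own toy example):

```python
import numpy as np

# Multiplying N-point DFTs gives *circular* convolution; the wrap-around
# corrupts the first output samples unless you zero-pad.
x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])

# Circular convolution via 4-point DFTs: y[n] = x[n] + x[(n-1) mod 4]
y_circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
print(y_circ)   # first sample is x[0] + x[3] = 5: wrap-around, not 1

# Zero-pad to length N1 + N2 - 1 to recover linear convolution
L = len(x) + len(h) - 1
y_lin = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real
print(np.allclose(y_lin, np.convolve(x, h)))  # True
```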
Zero-padding and aliasing
- Zero-padding means appending zeros to a sequence before computing the DFT. This doesn't add new information, but it interpolates between frequency bins, giving you a finer-grained view of the spectrum.
- Aliasing occurs when the sampling rate is too low to capture the signal's highest frequency components. High frequencies get "folded" into lower frequency bins, producing a distorted spectrum that can't be corrected after the fact.
Applications of Fourier analysis
Signal processing and filtering
Fourier transforms let you design filters by specifying what to keep and what to remove in the frequency domain:
- Low-pass filters pass frequencies below a cutoff and attenuate higher ones (useful for removing high-frequency noise)
- High-pass filters do the opposite (useful for removing DC offset or slow drift)
- Band-pass and band-stop filters target specific frequency ranges
In practice, you transform the signal, multiply by the filter's frequency response, and inverse-transform back. This is often more efficient than time-domain convolution for long signals.
Frequency response of systems
The frequency response describes how an LTI system amplifies or attenuates each frequency and how much phase shift it introduces. You obtain it by evaluating the transfer function on the imaginary axis:

$$H(j\omega) = H(s)\big|_{s = j\omega}$$

The magnitude $|H(j\omega)|$ gives the gain at each frequency, and $\angle H(j\omega)$ gives the phase shift. This is the foundation for Bode plots and Nyquist plots, which you'll use extensively in control design.
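As a sketch, here is the evaluation for a first-order lag $H(s) = 1/(s+1)$ (my own example system): substituting $s = j\omega$ gives the gain and phase at every frequency, and at the corner frequency $\omega = 1$ rad/s the familiar $-3$ dB / $-45°$ point appears.

```python
import numpy as np

# Frequency response of H(s) = 1/(s+1) by evaluating on the jw axis.
w = np.logspace(-2, 2, 400)          # frequency grid, rad/s
H = 1.0 / (1j * w + 1.0)             # H(j*omega)

gain_db = 20 * np.log10(np.abs(H))   # Bode gain curve
phase_deg = np.degrees(np.angle(H))  # Bode phase curve

# At the corner frequency omega = 1 rad/s:
H1 = 1.0 / (1j * 1.0 + 1.0)
print(20 * np.log10(abs(H1)))        # ~ -3.01 dB
print(np.degrees(np.angle(H1)))      # -45 degrees
```

The arrays `gain_db` and `phase_deg` are exactly what a Bode plot draws against `w` on log axes.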
Spectral analysis and synthesis
Spectral analysis decomposes a signal into its frequency components to identify dominant frequencies, measure power spectral density, or detect periodicities hidden in noisy data.
Spectral synthesis goes the other direction: you construct a signal with specific frequency content by summing sinusoids with chosen amplitudes and phases. This is useful for generating test signals or simulating disturbances.
Control system design using frequency domain
Frequency-domain methods are central to classical control design:
- Bode plots show gain and phase vs. frequency on logarithmic scales, making it easy to read off gain margin and phase margin
- Nyquist plots map the open-loop frequency response onto the complex plane, providing a graphical stability criterion
- Nichols charts combine gain and phase information for closed-loop performance analysis
Controllers (e.g., lead, lag, PID) are often designed by shaping the open-loop frequency response to achieve desired bandwidth, disturbance rejection, and robustness specifications.
Laplace transforms vs Fourier transforms
The Laplace transform is a generalization of the Fourier transform that can handle a broader class of signals, including growing exponentials and transient signals that don't have a Fourier transform in the classical sense.
Laplace transform definition and properties
The (unilateral) Laplace transform of $x(t)$ is:

$$X(s) = \int_{0}^{\infty} x(t)\, e^{-st}\, dt$$

Here $s = \sigma + j\omega$ is a complex variable. The real part $\sigma$ provides an exponential weighting factor $e^{-\sigma t}$ that can make otherwise non-convergent integrals converge. Laplace transforms share the same key properties as Fourier transforms: linearity, scaling, time-shifting, and the convolution-to-multiplication correspondence.

Relationship between Laplace and Fourier transforms
The Fourier transform is a special case of the Laplace transform evaluated on the imaginary axis:

$$X(\omega) = X(s)\big|_{s = j\omega}$$

This relationship holds when the region of convergence (ROC) of the Laplace transform includes the imaginary axis, which is the case for stable systems. For unstable systems, the Fourier transform may not exist, but the Laplace transform still works because the $e^{-\sigma t}$ factor forces convergence.
Stability analysis using Laplace transforms
The poles of a system's transfer function directly determine stability:
- Poles in the left-half plane ($\operatorname{Re}(s) < 0$): stable, decaying transients
- Poles on the imaginary axis ($\operatorname{Re}(s) = 0$): marginally stable, sustained oscillations
- Poles in the right-half plane ($\operatorname{Re}(s) > 0$): unstable, growing transients
The pole locations also tell you about transient response characteristics. Poles farther left decay faster. Complex pole pairs produce oscillatory responses, with the imaginary part setting the oscillation frequency and the real part setting the decay rate.
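A minimal stability check along these lines (the helper name `is_stable` and the example denominators are my own): find the denominator roots and test whether every pole has a negative real part.

```python
import numpy as np

# Continuous-time stability from pole locations.
def is_stable(den_coeffs):
    """den_coeffs: transfer-function denominator, highest power of s first."""
    poles = np.roots(den_coeffs)
    return bool(np.all(poles.real < 0))

print(is_stable([1, 3, 2]))   # s^2 + 3s + 2 = (s+1)(s+2): poles -1, -2 -> True
print(is_stable([1, 0, 4]))   # s^2 + 4: poles at +/-2j (marginal)   -> False
print(is_stable([1, -1]))     # s - 1: pole at +1                    -> False
```

Note the strict inequality: poles exactly on the imaginary axis (the marginally stable case) are reported as not stable here.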
Inverse Laplace transforms and partial fraction expansion
To go from $X(s)$ back to $x(t)$, you typically use partial fraction expansion:

- Factor the denominator of $X(s)$ into its roots (the poles)
- Decompose $X(s)$ into a sum of simpler fractions, each with one pole
- Look up or recognize the inverse transform of each fraction (e.g., $\frac{1}{s + a} \leftrightarrow e^{-at}\, u(t)$)
- Sum the individual time-domain terms
This technique is the standard way to find step responses, impulse responses, and general transient behavior of LTI systems.
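The four steps above can be sketched with SciPy's `scipy.signal.residue`, which performs the expansion numerically (the example $H(s) = 1/((s+1)(s+2))$ is my own choice):

```python
import numpy as np
from scipy import signal

# Partial fractions of H(s) = 1/(s^2 + 3s + 2) = 1/(s+1) - 1/(s+2),
# then table lookup: each r/(s - p) term inverts to r * exp(p*t).
b = [1.0]             # numerator coefficients
a = [1.0, 3.0, 2.0]   # denominator: s^2 + 3s + 2
r, p, k = signal.residue(b, a)   # residues, poles, direct polynomial terms

# Impulse response h(t) = sum_i r_i * exp(p_i * t) = e^{-t} - e^{-2t}
t = np.linspace(0, 5, 100)
h = sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)).real
print(np.allclose(h, np.exp(-t) - np.exp(-2 * t)))   # True
```

This is exactly the impulse response you would get from the table pair $\frac{1}{s+a} \leftrightarrow e^{-at} u(t)$ applied term by term.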
Z-transforms
The Z-transform is the discrete-time counterpart of the Laplace transform. It plays the same role for discrete-time (sampled) systems that the Laplace transform plays for continuous-time systems.
Z-transform definition and properties
The Z-transform of a discrete-time signal $x[n]$ is:

$$X(z) = \sum_{n=-\infty}^{\infty} x[n]\, z^{-n}$$

Here $z$ is a complex variable (for causal signals the sum starts at $n = 0$). The Z-transform maps sequences to functions of $z$, and it shares the familiar properties: linearity, time-shifting ($x[n - k] \leftrightarrow z^{-k} X(z)$), and convolution becomes multiplication.
Relationship between Z-transforms and Fourier transforms
The discrete-time Fourier transform (DTFT) is obtained by evaluating the Z-transform on the unit circle:

$$X(e^{j\omega}) = X(z)\big|_{z = e^{j\omega}}$$
This parallels how the continuous Fourier transform is the Laplace transform evaluated on the imaginary axis. The region of convergence (ROC) of the Z-transform determines whether this evaluation is valid and also encodes stability and causality information about the system.
Discrete-time systems analysis using Z-transforms
Stability criteria in the -domain mirror those in the -domain, but the boundary is the unit circle instead of the imaginary axis:
- Poles inside the unit circle ($|z| < 1$): stable
- Poles on the unit circle ($|z| = 1$): marginally stable
- Poles outside the unit circle ($|z| > 1$): unstable
The mapping between the $s$-plane and $z$-plane is $z = e^{sT}$, where $T$ is the sampling period. The left-half $s$-plane maps to the interior of the unit circle, which is why the stability boundary shifts from the imaginary axis to the unit circle.
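A quick numerical check of the mapping (the sampling period and the three example poles are my own choices): a left-half-plane pole lands inside the unit circle, an imaginary-axis pole lands on it, and a right-half-plane pole lands outside.

```python
import numpy as np

# z = exp(s*T) maps s-plane pole locations to z-plane pole locations.
T = 0.1                       # sampling period, seconds (assumed)

s_stable = -2.0 + 3.0j        # Re(s) < 0: left-half plane
s_marginal = 0.0 + 3.0j       # Re(s) = 0: imaginary axis
s_unstable = 1.0 + 0.0j       # Re(s) > 0: right-half plane

for s in (s_stable, s_marginal, s_unstable):
    z = np.exp(s * T)
    print(s, abs(z))          # |z| < 1, |z| == 1, |z| > 1 respectively
```

Since $|e^{sT}| = e^{\operatorname{Re}(s)\,T}$, the magnitude of $z$ depends only on the real part of $s$, which is exactly why the stability boundary maps the way it does.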
Inverse Z-transforms and partial fraction expansion
The process mirrors the continuous case:
- Express $X(z)$ as a ratio of polynomials in $z$ (or $z^{-1}$)
- Perform partial fraction expansion
- Match each term to a known Z-transform pair (e.g., $\frac{z}{z - a} \leftrightarrow a^n\, u[n]$)
- Sum the results
Power series expansion (long division) is another option: divide the numerator by the denominator to get the coefficients of $z^{-n}$ directly, which are the values of $x[n]$.
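The long-division route can be sketched as a short recursion (the helper name `power_series` and the example $X(z) = 1/(1 - a z^{-1})$, whose inverse is $x[n] = a^n$, are my own): each output coefficient is the current numerator coefficient minus the contribution of earlier outputs through the denominator.

```python
import numpy as np

# Inverse Z-transform by power-series expansion (synthetic long division).
# num and den are polynomials in z^{-1}, constant term first.
def power_series(num, den, n_terms):
    x = []
    for n in range(n_terms):
        val = num[n] if n < len(num) else 0.0
        # subtract the feedback from already-computed coefficients
        for k in range(1, min(n, len(den) - 1) + 1):
            val -= den[k] * x[n - k]
        x.append(val / den[0])
    return np.array(x)

a = 0.5
x = power_series([1.0], [1.0, -a], 8)   # X(z) = 1 / (1 - a z^{-1})
print(x)                                 # coefficients a^n: 1, 0.5, 0.25, ...
```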
Sampling and reconstruction
Sampling converts continuous-time signals to discrete-time, and reconstruction does the reverse. Understanding the limits of this process is critical for any digital control system.
Nyquist-Shannon sampling theorem
The Nyquist-Shannon sampling theorem states that a band-limited signal with maximum frequency $f_{\max}$ can be perfectly reconstructed from its samples if the sampling rate $f_s$ satisfies:

$$f_s > 2 f_{\max}$$

The minimum rate $2 f_{\max}$ is called the Nyquist rate. For example, audio signals with content up to 20 kHz require a sampling rate of at least 40 kHz (CD audio uses 44.1 kHz, providing some margin).
Aliasing and anti-aliasing filters
When you sample below the Nyquist rate, frequencies above $f_s / 2$ fold back into the range $[0, f_s / 2]$, appearing as lower-frequency components that weren't in the original signal. This is aliasing, and it's irreversible once the signal has been sampled.
Anti-aliasing filters are analog low-pass filters placed before the sampler. They attenuate all frequency content above $f_s / 2$ so that aliasing doesn't occur. In practice, you need some guard band between the signal bandwidth and $f_s / 2$ because real filters don't have infinitely sharp cutoffs.
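Aliasing is easy to demonstrate with two sinusoids (frequencies and sampling rate are my own illustrative choices): sampled at 1000 Hz, a 700 Hz sine produces exactly the same samples as a 300 Hz sine up to a sign flip, because 700 Hz folds down to $1000 - 700 = 300$ Hz.

```python
import numpy as np

# Sampling a 700 Hz sine at fs = 1000 Hz (Nyquist limit 500 Hz):
# the samples are indistinguishable from a 300 Hz sine.
fs = 1000.0
n = np.arange(32)
t = n / fs

x_high = np.sin(2 * np.pi * 700 * t)    # above the Nyquist limit
x_alias = np.sin(2 * np.pi * 300 * t)   # its folded-down alias

print(np.allclose(x_high, -x_alias))    # True: identical up to sign
```

Once sampled, no processing can tell these two signals apart, which is why the anti-aliasing filter must act before the sampler.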
Ideal vs practical sampling
- Ideal sampling assumes instantaneous samples (impulse sampling) and perfect sinc-function reconstruction. It's a mathematical model, not physically realizable.
- Practical sampling uses sample-and-hold circuits that hold each sample value constant until the next sample. This introduces the aperture effect, which slightly attenuates high frequencies with a sinc-shaped roll-off.
The choice of sampling rate, quantization resolution (number of bits per sample), and reconstruction filter quality all affect the fidelity of the digitized signal.
Signal reconstruction from samples
Reconstruction converts discrete samples back to a continuous signal. The ideal reconstruction filter is a perfect low-pass filter with cutoff at $f_s / 2$, which corresponds to sinc interpolation in the time domain. Since perfect sinc interpolation requires infinite-length filters, practical systems use approximations:
- Zero-order hold (ZOH): holds each sample constant until the next one. Simple but introduces a staircase effect and sinc-shaped frequency distortion.
- Linear interpolation (first-order hold): draws straight lines between samples. Smoother than ZOH but still imperfect.
- Sinc interpolation: the theoretically ideal method, approximated in practice with windowed sinc filters.
Higher-order interpolation methods get closer to ideal reconstruction at the cost of increased computation and delay.
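Sinc interpolation itself is a one-liner per evaluation point: $x(t) = \sum_n x[n]\, \mathrm{sinc}\!\left(\frac{t - nT}{T}\right)$ with the normalized sinc. The sketch below (signal frequency, sampling rate, and window size are my own choices) reconstructs a 3 Hz sine from its samples; the finite sample window means the result is only near-exact away from the window edges.

```python
import numpy as np

# Shannon reconstruction from samples via truncated sinc interpolation.
# np.sinc is the normalized sinc: sinc(x) = sin(pi x) / (pi x).
fs = 100.0
T = 1 / fs
n = np.arange(-200, 200)                  # finite sample window
x_n = np.sin(2 * np.pi * 3.0 * n * T)     # 3 Hz sine, well below fs/2 = 50 Hz

t = np.linspace(-0.5, 0.5, 501)           # dense grid, center of the window
x_rec = np.array([np.sum(x_n * np.sinc((ti - n * T) / T)) for ti in t])
x_true = np.sin(2 * np.pi * 3.0 * t)

print(np.max(np.abs(x_rec - x_true)))     # small truncation error
```

Windowing the sinc (rather than truncating it abruptly, as here) is the standard way to trade off filter length against reconstruction error.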