Sampling of continuous-time signals
Sampling converts a continuous-time signal into a discrete-time signal by capturing the signal's amplitude at regular intervals. This is the first step in getting real-world analog signals into a form that digital computers and microcontrollers can process.
Nyquist-Shannon sampling theorem
The Nyquist-Shannon theorem tells you the minimum rate at which you need to sample a signal to preserve all its information. A continuous-time signal can be perfectly reconstructed from its samples if the sampling frequency is at least twice the highest frequency component in the signal:
f_s ≥ 2·f_max
This minimum sampling frequency is called the Nyquist rate. If you sample below this rate, you get aliasing, where high-frequency components fold back and masquerade as low-frequency components in the sampled signal. The reconstructed signal then contains frequency content that wasn't in the original, and there's no way to fix it after the fact.
Ideal vs. practical sampling
- Ideal sampling multiplies the continuous-time signal by an infinite train of Dirac delta functions, producing a sequence of impulses whose amplitudes equal the signal's instantaneous values. This is a mathematical abstraction.
- Practical sampling uses a sample-and-hold (S/H) circuit that captures the signal's amplitude and holds it constant until the next sample arrives. The result is a staircase-like approximation of the original signal.
- The sample-and-hold process introduces a small amount of distortion because the holding time is finite and the circuit components behave non-ideally. This distortion follows a sinc envelope in the frequency domain (the aperture effect).
Aliasing in sampled signals
Aliasing occurs when the sampling frequency falls below the Nyquist rate. High-frequency components get "reflected" into the baseband and become indistinguishable from legitimate low-frequency content.
For example, if you sample a 900 Hz sine wave at only 1000 Hz, the reconstructed signal appears to be a 100 Hz sine wave. That distortion is permanent once it's in the data.
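The folding arithmetic behind this example can be sketched in a few lines (the helper name is illustrative, not from any library):

```python
def aliased_frequency(f_signal, f_sample):
    """Apparent frequency after sampling: fold f_signal into [0, f_sample/2]."""
    f = f_signal % f_sample        # alias into one sampling interval
    if f > f_sample / 2:
        f = f_sample - f           # fold the upper half back down
    return f

# A 900 Hz sine sampled at 1000 Hz shows up at 100 Hz.
print(aliased_frequency(900, 1000))   # 100
```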
To prevent aliasing, you place an anti-aliasing filter before the sampler to remove frequency components above f_s/2.
Anti-aliasing filters
Anti-aliasing filters are low-pass filters that attenuate frequency content above the Nyquist frequency (f_s/2) before sampling takes place.
- An ideal anti-aliasing filter would have a perfectly sharp cutoff at f_s/2, but real filters always have a gradual roll-off in the transition band.
- Common filter types used include Butterworth (maximally flat passband), Chebyshev (sharper roll-off but with passband ripple), and elliptic (sharpest roll-off but with ripple in both passband and stopband).
- The choice depends on the signal bandwidth, how much attenuation you need in the stopband, and how much phase distortion is acceptable for your application.
Sampling period selection
The sampling period T is the time between consecutive samples, and it's the reciprocal of the sampling frequency: T = 1/f_s. Choosing T well is critical for accurate signal representation in a digital control system.
Minimum sampling frequency
The Nyquist theorem sets the theoretical floor at f_s = 2·f_max. In practice, you almost always sample faster than this minimum because:
- Real anti-aliasing filters don't have infinitely sharp cutoffs, so you need extra margin.
- For control systems specifically, a common rule of thumb is to sample at 10 to 20 times the system's closed-loop bandwidth to keep the discrete-time controller behavior close to its continuous-time design.
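As a sketch, the two constraints above can be combined into a simple rate picker (the function name and the default factor of 20 are illustrative assumptions):

```python
def choose_sampling_rate(f_max_hz, loop_bw_hz, bw_factor=20):
    """Pick a sampling rate satisfying both the Nyquist floor (2 * f_max)
    and the control rule of thumb (bw_factor * closed-loop bandwidth)."""
    return max(2 * f_max_hz, bw_factor * loop_bw_hz)

# 500 Hz signal content, 100 Hz closed-loop bandwidth:
print(choose_sampling_rate(500, 100))   # 2000: the control rule dominates here
```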
Oversampling benefits
Oversampling means sampling well above the Nyquist rate. It costs more in terms of data throughput and processing, but provides several advantages:
- Reduced aliasing risk: aliased components get pushed further from the signal band, making them easier to filter out.
- Improved SNR: quantization noise spreads over a wider frequency range, so less of it falls within the signal band. Each doubling of the oversampling ratio yields roughly a 3 dB improvement in SNR.
- Simpler anti-aliasing filters: with more spectral room between the signal band and the aliasing region, you can use lower-order filters with gentler roll-off.
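The SNR benefit can be quantified; a sketch assuming plain oversampling (noise spread only, no noise shaping):

```python
import math

def oversampling_snr_gain_db(osr):
    """SNR gain from spreading fixed quantization noise power over osr
    times the minimum (Nyquist) bandwidth -- no noise shaping assumed."""
    return 10 * math.log10(osr)

print(round(oversampling_snr_gain_db(2), 2))    # 3.01: ~3 dB per doubling
print(round(oversampling_snr_gain_db(64), 2))   # 18.06: 64x oversampling
```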
Sampling jitter effects
Sampling jitter refers to random variations in the timing of each sample, caused by clock instability, electrical noise, or other imperfections in the sampling circuitry.
- Jitter effectively adds noise to the sampled signal. For a sinusoidal signal of frequency f and amplitude A, the RMS noise due to an RMS timing jitter t_j is approximately 2π·f·A·t_j/√2, which limits the achievable SNR to about -20·log10(2π·f·t_j) dB.
- This means jitter effects worsen at higher signal frequencies. A system that performs fine at low frequencies can become jitter-limited when processing faster signals.
- Jitter ultimately caps the effective resolution you can achieve, regardless of how many bits your ADC has.
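The frequency dependence is easy to see numerically; a sketch using the standard jitter-limited SNR expression:

```python
import math

def jitter_limited_snr_db(f_signal_hz, jitter_rms_s):
    """Best-case SNR for a sinusoid when RMS aperture jitter dominates:
    SNR = -20 * log10(2 * pi * f * t_j)."""
    return -20 * math.log10(2 * math.pi * f_signal_hz * jitter_rms_s)

# 1 ps RMS jitter: fine at 10 MHz, 20 dB worse at 100 MHz.
print(round(jitter_limited_snr_db(10e6, 1e-12), 1))    # 84.0
print(round(jitter_limited_snr_db(100e6, 1e-12), 1))   # 64.0
```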
Signal reconstruction
Signal reconstruction converts a discrete-time signal back into a continuous-time signal. Different interpolation methods offer different trade-offs between complexity and accuracy.
Zero-order hold
The zero-order hold (ZOH) is the simplest reconstruction method. It holds each sample value constant until the next sample arrives, producing a staircase waveform.
- ZOH is what most DACs do by default.
- The frequency-domain effect is multiplication by a sinc function, which attenuates higher frequencies and introduces a phase lag proportional to frequency.
- The abrupt transitions between steps contain high-frequency energy that typically needs to be filtered out with a reconstruction (smoothing) filter.
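The sinc-shaped droop can be evaluated directly; a sketch assuming the standard ZOH magnitude response |sin(πf/f_s)/(πf/f_s)|:

```python
import math

def zoh_gain_db(f_hz, fs_hz):
    """Zero-order hold magnitude response, |sinc(f/fs)|, in dB."""
    x = math.pi * f_hz / fs_hz
    if x == 0:
        return 0.0                 # no attenuation at DC
    return 20 * math.log10(abs(math.sin(x) / x))

# Droop at the Nyquist frequency (fs/2) is about -3.9 dB:
print(round(zoh_gain_db(24000, 48000), 2))   # -3.92
```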

First-order hold
The first-order hold (FOH) draws straight lines between consecutive samples, producing a piecewise-linear approximation.
- This gives a smoother output than ZOH and reduces high-frequency distortion.
- It still introduces error because it assumes the signal changes linearly between samples, which is only an approximation.
Sinc interpolation
Sinc interpolation is the theoretically perfect reconstruction method. If the original signal was sampled at or above the Nyquist rate, convolving the samples with a sinc function (sinc(x) = sin(πx)/(πx)) recovers the original continuous-time signal exactly.
- Perfect sinc interpolation requires an infinite number of samples and a perfect low-pass filter, so it's not physically realizable.
- In practice, truncated sinc interpolation (using a windowed sinc function over a finite number of samples) provides a good approximation with some small reconstruction error.
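A minimal sketch of truncated sinc interpolation over a finite sample list (helper names are illustrative):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_interp(samples, t, fs):
    """Reconstruct x(t) from uniform samples at rate fs by summing
    sinc kernels (exact only with infinitely many samples)."""
    return sum(s * sinc(t * fs - n) for n, s in enumerate(samples))

# At a sample instant, the reconstruction returns the sample itself:
print(sinc_interp([0.0, 1.0, 0.5], 1.0, 1.0))   # 1.0
```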
Quantization of sampled signals
Quantization maps the continuous amplitude range of a sampled signal to a finite set of discrete levels. This step is necessary because digital systems represent values with a fixed number of bits.
Uniform vs. non-uniform quantization
- Uniform quantization divides the full input range into equally spaced intervals. If you have an N-bit quantizer with input range V_FS, each step size (LSB) is Δ = V_FS/2^N.
- Non-uniform quantization uses smaller step sizes where you need more precision and larger steps elsewhere. A common example is logarithmic quantization (used in audio companding with μ-law or A-law), which gives finer resolution for small signals.
- Uniform quantization is simpler to implement and analyze. Non-uniform quantization is useful when the signal's probability distribution is non-uniform and you want to minimize overall distortion.
Quantization error
Quantization error is the difference between the true analog value and its quantized representation. For a uniform quantizer with step size Δ:
- The error is bounded between -Δ/2 and +Δ/2 (for rounding quantization).
- This error is often modeled as additive white noise, uniformly distributed over that interval, with variance Δ²/12.
- More bits means a smaller Δ, which means less quantization error.
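The noise model can be checked empirically; a sketch using a rounding quantizer and random inputs:

```python
import random

def quantize(x, delta):
    """Uniform rounding quantizer with step size delta."""
    return delta * round(x / delta)

# Empirical check: the error variance approaches delta**2 / 12.
random.seed(1)
delta = 0.01
errors = [quantize(x, delta) - x
          for x in (random.uniform(-1.0, 1.0) for _ in range(100_000))]
variance = sum(e * e for e in errors) / len(errors)
print(abs(variance - delta**2 / 12) < 1e-6)   # True
```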
Signal-to-quantization-noise ratio (SQNR)
SQNR measures how much the quantization noise degrades the signal. For a full-scale sinusoidal input with an N-bit uniform quantizer:
SQNR ≈ 6.02·N + 1.76 dB
Each additional bit of resolution adds about 6 dB of SQNR. So a 12-bit ADC gives roughly 74 dB of SQNR, while a 16-bit ADC gives roughly 98 dB.
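The rule of thumb is a one-liner worth keeping handy (a sketch of the standard full-scale-sine formula):

```python
def sqnr_db(n_bits):
    """Ideal SQNR for a full-scale sine through an n-bit uniform
    quantizer: 6.02 * n + 1.76 dB."""
    return 6.02 * n_bits + 1.76

print(round(sqnr_db(12), 2))   # 74.0
print(round(sqnr_db(16), 2))   # 98.08
```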
Dithering techniques
Dithering adds a small amount of random noise to the input signal before quantization. This sounds counterintuitive, but it works because:
- Without dither, quantization error is correlated with the input signal, producing harmonic distortion that's perceptually objectionable.
- Dither randomizes the error, converting it into broadband noise that's less noticeable and easier to handle with filtering.
Common types include uniform dither (rectangular PDF noise), triangular dither (triangular PDF noise, which fully decorrelates the error), and noise shaping (which pushes the dithered noise energy into frequency bands where it matters less).
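A minimal sketch of TPDF (triangular-PDF) dither ahead of a rounding quantizer (helper names are illustrative):

```python
import random

def tpdf_dither(lsb, rng=random):
    """Triangular-PDF dither spanning +/-1 LSB: sum of two uniform draws."""
    return (rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)) * lsb

def dithered_quantize(x, delta, rng=random):
    """Add dither before rounding so the error decorrelates from the input."""
    return delta * round((x + tpdf_dither(delta, rng)) / delta)

random.seed(0)
q = dithered_quantize(0.123, 0.01)
print(abs(q - 0.123) <= 1.5 * 0.01)   # True: error bounded by 1.5 LSB
```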
Analog-to-digital converters (ADCs)
ADCs convert continuous-time, continuous-amplitude analog signals into discrete-time, discrete-amplitude digital signals. The choice of ADC architecture depends on your requirements for resolution, speed, power, and cost.
Flash ADCs
Flash ADCs compare the input voltage against all possible quantization levels simultaneously using a bank of 2^N − 1 comparators (for N bits). A priority encoder then converts the comparator outputs into a binary code.
- Speed: the fastest ADC type, capable of gigasamples per second.
- Drawback: the number of comparators doubles with each added bit. A 10-bit flash ADC needs 1023 comparators, making high-resolution flash ADCs expensive and power-hungry.
- Best suited for low-resolution, very high-speed applications (e.g., 6-8 bits in RF and communications).
Successive approximation ADCs
Successive approximation register (SAR) ADCs use a binary search to find the digital value closest to the input:
- The MSB is set to 1, and an internal DAC generates the corresponding voltage.
- A comparator checks whether the input is above or below that voltage.
- If the input is above, the bit stays at 1; if below, it's set to 0.
- The process repeats for each bit from MSB to LSB.
An N-bit conversion takes N clock cycles. SAR ADCs are slower than flash but far more efficient in terms of power and area. They're the workhorse for medium-speed, medium-to-high resolution applications (10-16 bits, up to a few MSPS).
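The binary search above can be sketched as a toy model (an idealized DAC and comparator, not any real device's behavior):

```python
def sar_convert(vin, vref, bits):
    """Successive approximation: binary-search for the largest code whose
    ideal DAC voltage does not exceed vin (assumes 0 <= vin <= vref)."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)                # tentatively set this bit
        if vin >= trial * vref / (1 << bits):    # comparator: input >= DAC?
            code = trial                         # keep the bit
    return code

print(sar_convert(1.65, 3.3, 10))   # 512: exactly half of full scale
```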

Sigma-delta ADCs
Sigma-delta (ΣΔ) ADCs combine heavy oversampling with noise shaping to achieve very high resolution:
- The input is sampled at many times the Nyquist rate (often 64x to 256x oversampling).
- A 1-bit (or low-bit) quantizer digitizes the signal coarsely.
- The quantization error is fed back and subtracted from the input in a loop, shaping the noise spectrum so most quantization noise is pushed to high frequencies.
- A digital decimation filter removes the high-frequency noise and reduces the data rate to the desired output rate.
Sigma-delta ADCs excel at high-resolution (16-24 bits), lower-speed applications like sensor measurement, audio, and precision instrumentation.
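The feedback loop can be sketched with a toy first-order model (illustrative only; real modulators use higher-order loops plus a decimation filter):

```python
def sigma_delta_bitstream(x, n):
    """First-order sigma-delta modulator for a constant input x in [-1, 1].
    Returns n output bits whose running average tracks x."""
    integ = 0.0
    feedback = 0.0
    out = []
    for _ in range(n):
        integ += x - feedback              # accumulate the input-minus-feedback error
        bit = 1 if integ >= 0 else 0       # 1-bit quantizer
        feedback = 1.0 if bit else -1.0    # fed back and subtracted next cycle
        out.append(bit)
    return out

bits = sigma_delta_bitstream(0.5, 1000)
# The density of ones encodes the input value:
print(sum(2 * b - 1 for b in bits) / len(bits))   # 0.5
```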
ADC resolution and speed
- Resolution (number of bits) determines the smallest detectable input change. For an N-bit ADC with full-scale range V_FS, the LSB size is V_FS/2^N.
- Speed (samples per second) determines the maximum signal bandwidth you can digitize without aliasing.
- There's a fundamental trade-off: higher resolution generally means lower maximum sampling rate for a given architecture and power budget. Flash ADCs are fast but low-resolution; sigma-delta ADCs are high-resolution but slow.
Digital-to-analog converters (DACs)
DACs perform the reverse of ADCs: they take a digital code and produce a corresponding analog voltage or current. In control systems, DACs generate the analog actuator commands from the digital controller's output.
Weighted resistor DACs
A weighted resistor DAC uses resistors with binary-weighted values (R, 2R, 4R, …, 2^(N−1)·R). Each bit of the digital input controls a switch that connects the corresponding resistor to a summing amplifier.
- Simple in concept, but the wide range of resistor values (a 10-bit DAC needs a 512:1 ratio) makes it hard to manufacture with good precision.
- Rarely used for resolutions above 8 bits due to component matching limitations.
R-2R ladder DACs
The R-2R ladder uses only two resistor values (R and 2R) arranged in a ladder network. Each bit controls a switch that routes current either to the output summing node or to ground.
- The ladder structure ensures each successive bit contributes exactly half the current of the previous bit, producing binary weighting naturally.
- Much easier to manufacture accurately than weighted resistor DACs because only two resistor values need to be matched.
- This is the most common resistor-based DAC architecture.
Pulse-width modulation (PWM) DACs
PWM DACs don't use resistor networks at all. Instead, they generate a digital pulse train whose duty cycle is proportional to the desired output value, then pass it through a low-pass filter to extract the average (DC) voltage.
- Very simple to implement with standard digital hardware (a timer/counter peripheral on a microcontroller).
- Effective resolution is set by the number of timer counts per PWM period, so it increases with a faster timer clock or a lower PWM frequency.
- The low-pass filter must be designed carefully to adequately suppress the PWM switching frequency while preserving the signal bandwidth. Residual ripple at the switching frequency is a common issue.
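As a sketch of the filtering trade-off, a first-order RC filter's attenuation at the switching frequency can be estimated (the single-pole model is an assumption; practical designs often use higher-order filters):

```python
import math

def pwm_ripple_attenuation_db(f_pwm_hz, rc_seconds):
    """Attenuation of a first-order RC low-pass at the PWM frequency:
    |H(f)| = 1 / sqrt(1 + (2*pi*f*RC)**2), in dB."""
    return -20 * math.log10(math.sqrt(1 + (2 * math.pi * f_pwm_hz * rc_seconds) ** 2))

# 20 kHz PWM into RC = 10 ms: the carrier is attenuated by about 62 dB.
print(round(pwm_ripple_attenuation_db(20e3, 0.01), 1))   # -62.0
```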
DAC resolution and speed
- Resolution determines the smallest output voltage step. An N-bit DAC with output range V_FS has a step size of V_FS/2^N.
- Speed (update rate) determines how fast the output can change. For control applications, the DAC update rate must match or exceed the controller's sampling rate.
- Higher resolution and higher speed both increase cost and design complexity.
Practical considerations
Real digital control implementations face several issues beyond the idealized sampling and quantization theory. These finite-precision effects can degrade performance or even destabilize a system if not addressed.
Finite word length effects
Digital controllers use fixed-point or floating-point arithmetic with limited precision. This means every multiplication and addition introduces small rounding errors. Over many operations (especially in recursive filters and controllers), these errors can accumulate and cause the actual system behavior to deviate from the designed response.
Using higher-precision arithmetic (e.g., 32-bit instead of 16-bit) and careful signal scaling are the primary defenses.
Coefficient quantization
When you design a digital filter or controller in theory, the coefficients are real numbers with infinite precision. Implementing them on hardware means rounding those coefficients to fit the available word length.
- Quantized coefficients shift the locations of poles and zeros, which changes the frequency response and can even push a stable pole outside the unit circle, causing instability.
- Second-order sections (cascade or parallel form) are more robust to coefficient quantization than high-order direct-form implementations, because each section's poles depend on only a few coefficients.
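The pole-shift hazard shows up even in a one-pole filter y[n] = a·y[n−1] + x[n]; a sketch of rounding a coefficient to fixed point (the bit widths are illustrative):

```python
def quantize_coeff(a, frac_bits):
    """Round a coefficient to signed fixed point with frac_bits
    fractional bits."""
    scale = 1 << frac_bits
    return round(a * scale) / scale

a = 0.999                       # stable pole, just inside the unit circle
print(quantize_coeff(a, 8))     # 1.0: the pole lands ON the unit circle
print(quantize_coeff(a, 16) < 1.0)   # True: 16 fractional bits keep it stable
```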
Limit cycles in digital systems
Limit cycles are small, self-sustained oscillations that can appear in digital systems due to the nonlinear nature of quantization (rounding or truncation) within feedback loops.
- Even if the input is zero, a system with limit cycles can oscillate indefinitely at a small amplitude.
- They're more likely in systems implemented with low word lengths or aggressive truncation.
- Mitigation strategies include using higher-precision arithmetic, adding dither to break up the deterministic rounding patterns, and designing with adequate stability margins.
Overflow and saturation
Overflow happens when an arithmetic result exceeds the maximum (or minimum) value the digital representation can hold. In two's complement arithmetic, overflow causes the value to wrap around, producing a large error with the wrong sign.
Saturation arithmetic clamps the result to the maximum or minimum representable value instead of wrapping. This is less destructive than wraparound but still introduces distortion.
- Proper signal scaling throughout the system is the best prevention. You want to ensure that intermediate values stay within the representable range under worst-case conditions.
- Many DSP processors include hardware saturation modes specifically for this purpose.
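The difference between wraparound and saturation is easy to demonstrate with 16-bit arithmetic (a sketch; the helper names are illustrative):

```python
INT16_MAX, INT16_MIN = 32767, -32768

def wrap_add(a, b):
    """Two's-complement 16-bit add: overflow wraps around."""
    s = (a + b) & 0xFFFF
    return s - 0x10000 if s >= 0x8000 else s

def sat_add(a, b):
    """Saturating 16-bit add: overflow clamps to the rails."""
    return max(INT16_MIN, min(INT16_MAX, a + b))

print(wrap_add(30000, 10000))   # -25536: large error with the wrong sign
print(sat_add(30000, 10000))    # 32767: clamped at full scale
```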