Sampling of continuous-time signals
Sampling converts a continuous-time signal into a discrete-time signal by capturing values at regular intervals. This is the first step in any analog-to-digital conversion pipeline, and the choices you make here determine whether the digital representation faithfully preserves the original signal.
Nyquist sampling theorem
The Nyquist theorem gives you the fundamental constraint: a bandlimited continuous-time signal can be perfectly reconstructed from its samples if and only if the sampling rate is at least twice the highest frequency component in the signal:

$$f_s \geq 2 f_{\max}$$

where $f_s$ is the sampling rate and $f_{\max}$ is the highest frequency present. This minimum rate, $2 f_{\max}$, is called the Nyquist rate. Sampling below this rate causes aliasing, which is irreversible.
A subtlety worth noting: the theorem assumes the signal is strictly bandlimited (no energy above $f_{\max}$). Real-world signals are never perfectly bandlimited, which is why anti-aliasing filters and practical margin above the Nyquist rate matter so much.
Sampling rate vs signal bandwidth
The bandwidth of a signal is the range of frequencies it contains. For a baseband signal spanning $0$ to $f_{\max}$, the bandwidth equals $f_{\max}$, and the Nyquist rate is $2 f_{\max}$.
For bandpass signals (signals occupying a band not starting at DC), the situation is different. You don't always need to sample at twice the highest frequency. Bandpass sampling allows you to sample at a rate related to the signal's bandwidth rather than its absolute highest frequency, as long as you carefully avoid spectral overlap. This is relevant in applications like software-defined radio.
Aliasing in undersampled signals
When you sample below the Nyquist rate, frequency components above $f_s/2$ fold back into the range $[0, f_s/2]$. In the frequency domain, the periodic spectral replicas created by sampling overlap with the baseband spectrum.
The result: a high-frequency component at frequency $f > f_s/2$ appears at the aliased frequency $f_a = |f - k f_s|$ for some integer $k$ (chosen so that $f_a \leq f_s/2$), and you cannot distinguish it from a genuine component at that lower frequency. This distortion is permanent. No amount of post-processing can undo aliasing once it's baked into the samples.
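The folding arithmetic above can be sketched in a few lines. The helper name `alias_frequency` is illustrative, not from any library:

```python
def alias_frequency(f, fs):
    """Return the apparent (aliased) frequency, in [0, fs/2], of a
    tone at f Hz when sampled at fs Hz."""
    # Reduce f modulo fs, then fold the upper half of the band back down.
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

# A 7 kHz tone sampled at 10 kHz masquerades as 3 kHz,
# indistinguishable from a genuine 3 kHz component.
print(alias_frequency(7000, 10000))   # 3000
print(alias_frequency(13000, 10000))  # 3000
```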
Anti-aliasing filters for sampling
An anti-aliasing filter is a low-pass filter placed before the sampler. Its job is to attenuate all frequency content above $f_s/2$ so that aliasing is negligible.
- An ideal anti-aliasing filter has a brick-wall cutoff at $f_s/2$, passing everything below and rejecting everything above. This is physically unrealizable.
- Practical anti-aliasing filters have a transition band between the passband edge and the stopband. This is one reason systems often sample slightly above the strict Nyquist rate: the extra margin gives the filter's transition band room to roll off before the folding frequency.
- Filter design involves trading off passband ripple, stopband attenuation, and transition bandwidth. Higher-order filters give sharper cutoffs but add complexity, phase distortion, and cost.
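These trade-offs can be explored with a design sketch. In a real system the anti-aliasing filter is analog, but a digital Butterworth prototype in SciPy shows the same behavior; the numbers (96 kHz sampling, 20 kHz passband, 6th order) are illustrative:

```python
import numpy as np
from scipy import signal

fs = 96_000          # sampling rate, Hz (illustrative)
cutoff = 20_000      # passband edge, Hz (illustrative)

# 6th-order Butterworth low-pass as an anti-aliasing prototype,
# returned as second-order sections for numerical robustness.
sos = signal.butter(6, cutoff, btype="low", fs=fs, output="sos")

# Evaluate the magnitude response at the passband edge and at the
# folding frequency fs/2.
w, h = signal.sosfreqz(sos, worN=[cutoff, fs / 2], fs=fs)
print(20 * np.log10(np.abs(h) + 1e-300))  # ~-3 dB at cutoff, huge attenuation at fs/2
```

A higher-order design sharpens the roll-off between 20 kHz and 48 kHz at the cost of more analog stages and worse phase linearity.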
Sampling of discrete-time signals
Changing the sampling rate of an already-discrete signal is called sample rate conversion. This comes up constantly when you need to interface systems running at different rates or when you want to reduce computational load.
Upsampling vs downsampling
- Upsampling by factor $L$: insert $L-1$ zeros between each original sample. This increases the sample rate by $L$ but introduces spectral images (replicas of the original spectrum) that need to be removed.
- Downsampling by factor $M$: keep every $M$-th sample and discard the rest. This decreases the sample rate by $M$ but risks aliasing if the signal has energy above $\pi/M$ (in normalized frequency).
The time-frequency duality is clean here: upsampling stretches the time axis and compresses the frequency axis, while downsampling does the opposite.
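Both operations are one-liners in NumPy; the toy signal below is purely for illustration:

```python
import numpy as np

x = np.arange(1.0, 7.0)     # toy signal: [1, 2, 3, 4, 5, 6]
L, M = 3, 2

# Upsampling (expander): insert L-1 zeros between samples.
up = np.zeros(len(x) * L)
up[::L] = x

# Downsampling (compressor): keep every M-th sample.
down = x[::M]

print(up)    # [1. 0. 0. 2. 0. 0. 3. 0. 0. 4. 0. 0. 5. 0. 0. 6. 0. 0.]
print(down)  # [1. 3. 5.]
```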
Interpolation in upsampling
Zero-insertion alone doesn't give you a useful higher-rate signal. The inserted zeros create unwanted spectral images. You need an interpolation filter (a low-pass filter with cutoff $\pi/L$) to suppress those images and reconstruct smooth intermediate values.
Common interpolation approaches:
- Zero-order hold: replaces each zero with the previous sample value. Simple but introduces a sinc-shaped spectral droop.
- Linear interpolation: connects adjacent original samples with straight lines. Better than zero-order hold but still has limited frequency-domain accuracy.
- Sinc interpolation: the theoretically ideal method, using a sinc kernel. In practice, you approximate it with a windowed FIR filter.
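The windowed-FIR approximation of sinc interpolation can be sketched as follows; the tap count (129) and factor $L = 4$ are illustrative choices, and `firwin` supplies a Hamming-windowed sinc by default:

```python
import numpy as np
from scipy import signal

L = 4
n = np.arange(200)
x = np.sin(2 * np.pi * 0.02 * n)        # slowly varying test tone

# Zero-insert, then suppress spectral images with a windowed-sinc
# low-pass filter: cutoff pi/L, gain L to restore the amplitude
# lost by inserting zeros.
up = np.zeros(len(x) * L)
up[::L] = x
h = L * signal.firwin(129, 1.0 / L)     # Hamming-windowed sinc FIR
y = signal.lfilter(h, 1.0, up)
# After the filter's 64-sample group delay, y[64 + L*k] tracks x[k].
```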
Decimation in downsampling
Decimation is downsampling done properly. The steps are:
- Apply a low-pass anti-aliasing filter with cutoff $\pi/M$ to the input signal.
- Downsample by factor $M$ (keep every $M$-th sample).
The filter in the first step prevents aliasing by removing frequency content that would fold back after downsampling. Skipping this filter is a common source of artifacts.
Decimation is widely used to reduce data rates and computational cost when the full bandwidth of the original signal isn't needed.
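The two-step procedure can be sketched as below; the filter length and the two-tone test signal are illustrative. The high tone sits above $\pi/M$ and would alias without the filter:

```python
import numpy as np
from scipy import signal

M = 4
n = np.arange(4000)
# Low tone (survives) plus a high tone above pi/M (must be removed).
x = np.sin(2 * np.pi * 0.01 * n) + 0.5 * np.sin(2 * np.pi * 0.35 * n)

# Step 1: anti-aliasing low-pass, cutoff pi/M (0.25 of Nyquist here).
h = signal.firwin(129, 1.0 / M)
xf = signal.lfilter(h, 1.0, x)

# Step 2: keep every M-th sample.
y = xf[::M]
# y is now (approximately) just the low tone at the reduced rate.
```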

Resampling of discrete-time signals
To change the sampling rate by a rational factor $L/M$:
- Upsample by $L$ (insert $L-1$ zeros between samples).
- Filter with a low-pass filter at cutoff $\min(\pi/L, \pi/M)$.
- Downsample by $M$.
A single filter handles both the interpolation (for upsampling) and anti-aliasing (for downsampling). In practice, polyphase filter implementations avoid computing samples that will just be thrown away, making this much more efficient than the naive three-step approach.
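SciPy's `resample_poly` is a polyphase implementation of this scheme; the random test signal below is illustrative:

```python
import numpy as np
from scipy import signal

x = np.random.randn(1000)

# Rational rate change by L/M = 3/2. resample_poly designs the single
# combined interpolation/anti-aliasing filter and evaluates it in
# polyphase form, skipping the samples that would be discarded.
y = signal.resample_poly(x, up=3, down=2)
print(len(y))  # 1500, i.e. ceil(1000 * 3 / 2)
```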
Quantization of sampled signals
Quantization maps continuous amplitude values to a finite set of discrete levels. After sampling gives you discrete time, quantization gives you discrete amplitude. Together, they produce a fully digital signal.
Uniform vs non-uniform quantization
Uniform quantization divides the full input range into $2^B$ equally spaced intervals. Each interval maps to a single output level. The step size (quantization interval) is:

$$\Delta = \frac{x_{\max} - x_{\min}}{2^B}$$

where $B$ is the number of bits.
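A minimal mid-tread quantizer sketch; the function name and the clipping convention at the top edge are choices made for illustration, not a standard API:

```python
import numpy as np

def uniform_quantize(x, bits, x_max=1.0):
    """Mid-tread uniform quantizer over [-x_max, x_max]."""
    delta = 2 * x_max / 2**bits           # step size
    q = delta * np.round(x / delta)       # round to nearest level
    # Clip so the top code stays representable in B bits.
    return np.clip(q, -x_max, x_max - delta)

x = np.linspace(-1, 1, 1001)
q = uniform_quantize(x, bits=8)
err = np.abs(x - q)
print(err.max())  # 0.0078125: delta at the clipped top edge, delta/2 elsewhere
```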
Non-uniform quantization uses smaller step sizes where the signal spends most of its time (typically near zero for speech and audio) and larger step sizes elsewhere. This improves the signal-to-noise ratio for low-amplitude signals without increasing the bit depth.
Non-uniform quantization is typically implemented via companding: compress the signal with a nonlinear function, apply uniform quantization, then expand on reconstruction. The two standard companding laws are:
- $\mu$-law (North America, Japan): $y = \operatorname{sgn}(x)\,\dfrac{\ln(1 + \mu|x|)}{\ln(1 + \mu)}$ for $|x| \leq 1$, typically with $\mu = 255$
- A-law (Europe, most other regions): a piecewise function that behaves similarly but with different characteristics near zero
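The compress/expand pair for $\mu$-law can be sketched on normalized samples in $[-1, 1]$; the helper names are illustrative:

```python
import numpy as np

def mu_compress(x, mu=255.0):
    """mu-law compressor; x normalized to [-1, 1]."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_expand(y, mu=255.0):
    """Inverse of mu_compress: restore the original amplitude."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

x = np.linspace(-1, 1, 101)
roundtrip = mu_expand(mu_compress(x))
print(np.max(np.abs(roundtrip - x)))  # ~0: compand/expand is lossless by itself
```

The loss comes only from the uniform quantizer placed between these two stages, which is exactly the point: small amplitudes occupy more of the compressed range, so they get finer effective steps.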
Quantization noise
The difference between the original continuous value and the quantized value is the quantization error. For a uniform quantizer with step size $\Delta$, this error is bounded by $|e| \leq \Delta/2$ (for rounding) or $|e| < \Delta$ (for truncation).
Under the standard additive noise model, quantization error is treated as white noise, uniformly distributed over $[-\Delta/2, \Delta/2]$, with variance:

$$\sigma_e^2 = \frac{\Delta^2}{12}$$
This model works well when the signal is complex enough relative to the step size that the error behaves like random noise. It breaks down for very coarse quantization or highly correlated signals, where the error becomes signal-dependent and can produce audible or visible artifacts.
Signal-to-quantization-noise ratio (SQNR)
SQNR quantifies how much the signal power exceeds the quantization noise power. For a uniform quantizer with $B$ bits and a full-scale sinusoidal input:

$$\text{SQNR} \approx 6.02B + 1.76 \ \text{dB}$$
Each additional bit gives you roughly 6 dB of improvement. For example:
- 8-bit: ~49.9 dB
- 16-bit: ~98.1 dB
- 24-bit: ~146.2 dB
This formula assumes the input is a full-scale sine wave. For other signal distributions or signals that don't use the full dynamic range, the effective SQNR will be lower. This is one reason why non-uniform quantization or floating-point representations are preferred for signals with large dynamic range.
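The 6 dB-per-bit rule can be checked numerically; the sine frequency and sample count below are arbitrary:

```python
import numpy as np

B = 16
n = np.arange(100_000)
x = np.sin(2 * np.pi * 0.01234 * n)      # full-scale sine

# 16-bit mid-tread quantization.
delta = 2.0 / 2**B
q = np.clip(delta * np.round(x / delta), -1.0, 1.0 - delta)
noise = x - q

sqnr_db = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
print(sqnr_db)  # close to 6.02*16 + 1.76 = 98.1 dB
```

Replacing the full-scale sine with one at half amplitude drops the measurement by about 6 dB, which is the "doesn't use the full dynamic range" penalty mentioned above.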
Dithering for quantization noise reduction
Dithering adds a small amount of noise to the signal before quantization. This sounds counterintuitive, but it decorrelates the quantization error from the signal, converting deterministic distortion (harmonic artifacts) into broadband noise, which is perceptually much less objectionable.
Types of dither:
- Rectangular (RPDF) dither: uniform noise with peak amplitude $\Delta/2$ (1 LSB peak-to-peak). Eliminates signal-dependent distortion but leaves some noise modulation.
- Triangular (TPDF) dither: the sum of two rectangular dither signals, with peak amplitude $\Delta$ (2 LSB peak-to-peak). Eliminates both distortion and noise modulation. This is the standard choice for most audio applications.
- Noise-shaped dither: applies a filter to the dither or the quantization error feedback, pushing noise energy into frequency bands where it's less perceptible (e.g., above 15 kHz for audio).
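A TPDF dither sketch; the 8-bit step size and the low-level test tone (about 1.3 LSB peak) are chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 8
delta = 2.0 / 2**B

n = np.arange(50_000)
x = 0.01 * np.sin(2 * np.pi * 0.01 * n)   # low-level tone, ~1.3 LSB peak

# TPDF dither: sum of two independent uniforms, peak amplitude +/- delta.
dither = (rng.uniform(-delta / 2, delta / 2, n.size)
          + rng.uniform(-delta / 2, delta / 2, n.size))

q_plain = delta * np.round(x / delta)             # undithered: distortion
q_dith = delta * np.round((x + dither) / delta)   # dithered: broadband noise

# The dithered error behaves like noise uncorrelated with the signal.
print(np.corrcoef(q_dith - x, x)[0, 1])
```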
Pulse code modulation (PCM)
PCM is the standard method for digitally representing analog signals. It combines sampling, quantization, and binary encoding into a single framework. Nearly all uncompressed digital audio (CD, WAV, telephony) uses PCM.
PCM encoding vs decoding
Encoding (analog to digital):
- Sample the analog signal at rate $f_s$.
- Quantize each sample to one of $2^B$ levels.
- Encode each quantized level as a $B$-bit binary word.
Decoding (digital to analog):
- Convert each $B$-bit word back to its quantized amplitude value.
- Output these values at the original sample rate (producing a staircase waveform).
- Apply a reconstruction (low-pass) filter to smooth the output and recover the analog signal.
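The encode/decode pair (minus the analog filtering stages) can be sketched as a roundtrip; the signed-integer code convention and the $[-1, 1)$ range are assumptions of this sketch:

```python
import numpy as np

def pcm_encode(x, bits=16):
    """Quantize samples in [-1, 1) to signed integer codes."""
    full = 2 ** (bits - 1)
    return np.clip(np.round(x * full), -full, full - 1).astype(np.int32)

def pcm_decode(codes, bits=16):
    """Map integer codes back to amplitude values."""
    return codes.astype(np.float64) / 2 ** (bits - 1)

t = np.linspace(0, 1, 8000, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz tone at half scale
codes = pcm_encode(x)
y = pcm_decode(codes)
print(np.max(np.abs(x - y)))  # at most half a step: 2**-16 ~ 1.5e-5
```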

PCM bit rate vs quantization levels
The bit rate of a PCM stream is:

$$R = B \cdot f_s \ \text{bits/s}$$

For CD-quality audio: $f_s = 44{,}100$ Hz, $B = 16$ bits, so $R = 705{,}600$ bits/s per channel (about 1.41 Mbps for stereo).
Higher bit depth means better SQNR but proportionally higher data rates and storage requirements. Common formats:
- 8-bit: ~48 dB SQNR, used in telephony
- 16-bit: ~96 dB SQNR, CD-quality audio
- 24-bit: ~144 dB SQNR, professional audio recording
- 32-bit (float): used in audio processing chains for headroom, not typically for final storage
Companding in PCM
Companding in PCM systems applies non-uniform quantization to improve performance for signals with large dynamic range, particularly speech.
Speech signals spend most of their time at low amplitudes. With uniform quantization, these low-level portions get poor SQNR. Companding solves this by:
- Compressing the signal's dynamic range before uniform quantization (more levels allocated to small amplitudes).
- Expanding the signal after decoding to restore the original dynamic range.
The $\mu$-law and A-law companders are standardized in ITU-T G.711 for telephony. Both achieve roughly 12-bit uniform quantization quality using only 8 bits.
Oversampling techniques
Oversampling means sampling well above the Nyquist rate, typically by factors of 4x to 256x. The extra samples don't carry new information about the signal, but they provide significant practical advantages in converter design.
Oversampling ADC vs Nyquist-rate ADC
The key benefits of oversampling:
- Relaxed anti-aliasing filter requirements. With an oversampling ratio (OSR) of, say, 64x, the transition band of the anti-aliasing filter can be very wide. A simple, low-order analog filter suffices, compared to the steep, high-order filter a Nyquist-rate ADC demands.
- Improved SNR through noise spreading. Quantization noise power stays the same, but it's spread across a bandwidth of $f_s/2$ instead of just the signal bandwidth $f_B$. After digital low-pass filtering to the signal bandwidth, you keep only a fraction $1/\text{OSR}$ of the total noise. Each doubling of OSR gives about 3 dB of SNR improvement (0.5 bits of resolution) even without noise shaping.
- Trade-off: speed for resolution. Oversampling ADCs use simple, low-resolution quantizers (often just 1 bit) running at very high speeds, then rely on digital filtering and decimation to achieve high effective resolution.
Sigma-delta modulation
Sigma-delta ($\Sigma\Delta$) modulation is the dominant architecture for high-resolution oversampling converters. Its structure:
- Compute the difference between the input signal and the feedback signal (the "delta").
- Integrate (accumulate) this difference (the "sigma").
- Quantize the integrator output with a coarse quantizer (often 1-bit).
- Feed the quantized output back through a DAC to close the loop.
The feedback loop forces the quantizer output to track the input on average. The integration step shapes the quantization error: instead of being white, the noise is pushed to higher frequencies (first-order noise shaping). Higher-order modulators use multiple integrator stages to push noise even more aggressively out of the signal band.
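The loop can be simulated in a few lines. This is a behavioral sketch of a first-order modulator with a $\pm 1$ one-bit quantizer, not a circuit model:

```python
import numpy as np

def sigma_delta_1st(x):
    """First-order sigma-delta modulator with a 1-bit quantizer."""
    y = np.empty_like(x)
    integ = 0.0       # integrator state
    fb = 0.0          # previous quantized output (feedback DAC)
    for i, xi in enumerate(x):
        integ += xi - fb                   # delta, then sigma (accumulate)
        fb = 1.0 if integ >= 0 else -1.0   # coarse 1-bit quantizer
        y[i] = fb                          # feedback closes the loop
    return y

x = np.full(20_000, 0.3)          # DC input within [-1, 1]
bits = sigma_delta_1st(x)
print(bits.mean())                # ~0.3: the bitstream tracks the input on average
```

The running average of the $\pm 1$ bitstream converges to the input value, which is exactly what the downstream decimation filter extracts.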
Noise shaping in oversampling
Noise shaping is what makes sigma-delta converters so effective. The loop filter in the modulator acts as a highpass filter on the quantization noise while passing the signal through with unity gain.
For an $N$-th order modulator, the noise transfer function behaves approximately as $(1 - z^{-1})^N$, which means the in-band noise power decreases dramatically with both the modulator order and the OSR. Specifically, for an $N$-th order modulator with oversampling ratio $\text{OSR}$:

$$\text{SQNR} \approx 6.02B + 1.76 - 10\log_{10}\frac{\pi^{2N}}{2N+1} + (2N+1) \cdot 10\log_{10}(\text{OSR}) \ \text{dB}$$

where $B$ is the quantizer resolution (often 1 bit).
Higher-order noise shaping gives dramatic SNR improvements per doubling of OSR, but stability becomes a concern for orders above 2. Practical high-order modulators use multi-stage (MASH) architectures or carefully designed feedback coefficients to maintain stability.
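The order/OSR trade-off is easy to tabulate from the SQNR expression; the function name below is illustrative:

```python
import math

def sd_sqnr_db(bits, order, osr):
    """Predicted peak SQNR (dB) for an order-N sigma-delta modulator."""
    return (6.02 * bits + 1.76
            - 10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
            + (2 * order + 1) * 10 * math.log10(osr))

# A 1-bit quantizer at 64x oversampling:
print(round(sd_sqnr_db(1, 1, 64), 1))   # ~56.8 dB for a 1st-order loop
print(round(sd_sqnr_db(1, 2, 64), 1))   # ~85.2 dB for a 2nd-order loop
```

Note how adding one modulator order buys far more than doubling the OSR: each doubling gains $(2N+1) \cdot 3$ dB, so higher orders steepen the slope.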
Practical considerations
Real implementations of sampling and quantization face constraints that ideal theory ignores. These effects can significantly degrade system performance if not accounted for.
Finite word length effects
Digital systems represent numbers with a fixed number of bits, and this limited precision introduces several problems:
- Coefficient quantization: when filter coefficients are rounded to fit the available word length, the actual frequency response deviates from the designed one. Pole and zero locations shift, which can increase passband ripple, reduce stopband attenuation, or even push poles outside the unit circle (causing instability in IIR filters).
- Roundoff noise: arithmetic operations produce results that must be truncated or rounded to fit the word length. In feedforward (FIR) structures, this noise is relatively benign. In recursive (IIR) structures, roundoff errors recirculate through feedback paths and can accumulate, sometimes producing limit cycles (persistent low-level oscillations even with zero input).
- Overflow: if intermediate computations exceed the representable range, the result wraps around or saturates, producing large errors. Scaling strategies and saturation arithmetic help mitigate this.
These effects are more severe in fixed-point implementations than in floating-point, but floating-point has its own precision limitations at extreme dynamic ranges.
Computational complexity of sampling and quantization
- Oversampling converters trade analog complexity for digital complexity. A sigma-delta ADC with 128x OSR needs digital decimation filters running at very high clock rates.
- Polyphase filter structures reduce the cost of interpolation and decimation by computing only the output samples you actually need.
- Dithering and noise shaping add modest computational overhead but can be critical for meeting performance targets in audio and measurement applications.
- For real-time and power-constrained systems (embedded devices, battery-powered sensors), the choice between Nyquist-rate and oversampling architectures often comes down to the power budget and available silicon area.
Hardware implementation of sampling and quantization
ADC and DAC architecture selection depends on the application:
| Architecture | Typical Speed | Typical Resolution | Common Applications |
|---|---|---|---|
| Flash | Very high (GHz) | Low (6-8 bits) | Oscilloscopes, radar |
| Successive Approximation (SAR) | Medium (1-100 MHz) | Medium-High (10-18 bits) | Data acquisition, sensor interfaces |
| Pipelined | High (10-500 MHz) | Medium (8-16 bits) | Communications, video |
| Sigma-Delta | Low-Medium (kHz-MHz) | Very High (16-24+ bits) | Audio, precision measurement |
Beyond the converter itself, several circuit-level impairments affect performance:
- Clock jitter: timing uncertainty in the sampling clock adds noise proportional to the signal's slew rate. For high-frequency, high-resolution applications, jitter requirements can be extremely tight (sub-picosecond).
- Thermal noise: sets a fundamental floor on achievable SNR, independent of quantization.
- Reconstruction filters: on the DAC side, these smooth the staircase output. Like anti-aliasing filters, they must be designed to match the system's frequency and distortion requirements.
Careful PCB layout, power supply decoupling, and clock distribution are essential for achieving the converter's specified performance in a real system.