📚Signal Processing Unit 6 Review
6.1 Sampling Process and Reconstruction

Written by the Fiveable Content Team • Last updated August 2025

Sampling in Digital Signal Processing

Sampling and reconstruction form the bridge between the analog world and digital systems. Every time a microphone captures audio or a sensor reads a temperature, a continuous-time signal gets converted into discrete samples that a computer can process. Getting this conversion right (and reversing it accurately) depends on understanding a few core principles.

Sampling Process and Nyquist-Shannon Theorem

Sampling converts a continuous-time signal into a discrete-time signal by measuring the signal's amplitude at evenly spaced points in time. The spacing between measurements is the sampling period $T_s$, and its reciprocal is the sampling frequency (or sampling rate):

$$f_s = \frac{1}{T_s}$$

The sampling rate is measured in samples per second (Hz). A sampling rate of 44,100 Hz, for example, means the signal is measured 44,100 times every second.

The Nyquist-Shannon sampling theorem gives the fundamental rule: a bandlimited continuous-time signal can be perfectly reconstructed from its samples only if the sampling rate is at least twice the signal's highest frequency component. That minimum rate is called the Nyquist rate:

$$f_s \geq 2 f_{\max}$$

So if a signal's highest frequency component is 4 kHz, you need a sampling rate of at least 8 kHz.

What happens when you don't meet this requirement?

  • Undersampling (sampling below the Nyquist rate) causes aliasing. High-frequency components fold back into the lower frequency range and masquerade as frequencies that weren't in the original signal. This distortion is irreversible once it occurs.
  • Oversampling (sampling above the Nyquist rate) gives you more data than strictly necessary, but it provides a safety margin. It also relaxes the design requirements for anti-aliasing filters, since the transition band between the passband and the folding frequency becomes wider.
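The fold-back behavior can be sketched numerically. The helper below (a hypothetical name, not a standard library routine) computes the apparent frequency of a real tone after sampling at a given rate:

```python
def aliased_frequency(f, fs):
    """Apparent frequency of a real tone at f Hz after sampling at fs Hz
    (illustrative helper, assuming an ideal sampler)."""
    f_mod = f % fs                 # fold into one sampling-rate-wide interval
    return min(f_mod, fs - f_mod)  # real signals reflect about fs/2

print(aliased_frequency(3000, 8000))    # in band: stays at 3000
print(aliased_frequency(6600, 10000))   # above fs/2: folds to 3400
```

A tone below $f_s/2$ passes through unchanged; anything above it reflects back into the baseband.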

Sampling Function and Quantization

Mathematically, sampling can be modeled as multiplying the continuous-time signal $x(t)$ by a Dirac comb (an infinite train of unit impulses spaced $T_s$ apart):

$$x_s(t) = x(t) \cdot \sum_{n=-\infty}^{\infty} \delta(t - nT_s)$$

This produces a sequence of weighted impulses whose amplitudes equal the signal's value at each sample instant.

Quantization is the second step. It maps each sample's continuous amplitude to the nearest value in a finite set of discrete levels, typically encoded as binary numbers.

  • The number of quantization levels is $2^b$, where $b$ is the number of bits per sample. An 8-bit system has 256 levels; a 16-bit system has 65,536.
  • More levels means finer resolution and a closer match to the original amplitude.
  • The tradeoff: more bits per sample means more data to store and transmit.

Quantization always introduces quantization noise, which is the difference between the true sample value and the nearest quantized level. For a uniform quantizer with step size $\Delta$, the quantization noise power is approximately:

$$\sigma_q^2 = \frac{\Delta^2}{12}$$

This noise sets a floor on how accurately the digital representation can capture the original signal.
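The $\Delta^2/12$ figure is easy to check empirically. This sketch (assuming a mid-tread uniform quantizer over a full-scale range of $[-1, 1)$) compares measured noise power against the formula:

```python
import numpy as np

rng = np.random.default_rng(0)

b = 8                        # bits per sample
delta = 2.0 / 2**b           # step size over a full-scale range of [-1, 1)

x = rng.uniform(-1, 1, 100_000)        # test signal spanning the full range
xq = np.round(x / delta) * delta       # mid-tread uniform quantizer
noise_power = np.mean((x - xq) ** 2)

print(noise_power)        # measured quantization noise power
print(delta**2 / 12)      # predicted sigma_q^2
```

With a signal that exercises many quantization levels, the measured value lands very close to the prediction.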

Continuous-to-Discrete Signal Conversion

Conversion Steps

Converting an analog signal to a digital one involves three stages in practice:

  1. Anti-aliasing filtering. A low-pass filter removes (or attenuates) any frequency components above $f_s/2$ before sampling occurs. This ensures the signal is bandlimited so the Nyquist condition is satisfied.
  2. Sampling. The filtered signal is measured at uniform intervals of $T_s$ seconds, producing a sequence of discrete-time samples.
  3. Quantization and encoding. Each sample's amplitude is rounded to the nearest quantization level and represented as a binary number.
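The three stages can be sketched end to end. This is a minimal illustration, assuming the "analog" input is emulated on a fine 48 kHz grid and a simple windowed-sinc FIR stands in for the anti-aliasing filter:

```python
import numpy as np

def windowed_sinc_lowpass(cutoff_hz, fs_hz, num_taps=101):
    """Hamming-windowed sinc low-pass FIR (illustrative design, not optimized)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = (2 * cutoff_hz / fs_hz) * np.sinc(2 * cutoff_hz / fs_hz * n)
    h *= np.hamming(num_taps)
    return h / h.sum()              # normalize for unity gain at DC

fs_in, M = 48_000, 6                # fine "analog" grid and decimation factor
fs_out = fs_in // M                 # 8 kHz output rate

t = np.arange(4800) / fs_in
# 1 kHz in-band tone plus a 9.5 kHz interferer above fs_out/2
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 9500 * t)

# 1. Anti-aliasing filter: attenuate everything above fs_out/2 = 4 kHz
x_filt = np.convolve(x, windowed_sinc_lowpass(fs_out / 2, fs_in), mode="same")

# 2. Sampling: keep every M-th value
x_sampled = x_filt[::M]

# 3. Quantization and encoding: round to signed 8-bit integer codes
b = 8
delta = 2.0 / 2**b
codes = np.clip(np.round(x_sampled / delta),
                -(2**(b - 1)), 2**(b - 1) - 1).astype(np.int8)
```

Without step 1, the 9.5 kHz interferer would fold to 1.5 kHz in the 8 kHz output and be impossible to remove afterwards.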

Conversion Considerations

  • The sampling rate must satisfy $f_s \geq 2 f_{\max}$. In practice, engineers choose a rate somewhat higher than the theoretical minimum to account for non-ideal filter rolloff.
  • The anti-aliasing filter is critical. Without it, any energy above $f_s/2$ in the original signal will alias into the baseband, and no amount of processing after sampling can undo that.
  • Increasing the number of quantization bits improves signal-to-noise ratio (roughly 6 dB per additional bit for uniform quantization) but increases storage and bandwidth requirements.
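The roughly-6-dB-per-bit rule can be verified directly. This sketch (assuming a full-scale sine test signal and a uniform rounding quantizer) measures the SQNR change from adding one bit:

```python
import numpy as np

def sqnr_db(bits, n=200_000):
    """Empirical signal-to-quantization-noise ratio for a full-scale sine
    (illustrative helper)."""
    t = np.linspace(0, 1, n, endpoint=False)
    x = np.sin(2 * np.pi * 50 * t)          # full-scale sine test signal
    delta = 2.0 / 2**bits
    xq = np.round(x / delta) * delta        # uniform quantizer
    return 10 * np.log10(np.mean(x**2) / np.mean((x - xq) ** 2))

print(sqnr_db(9) - sqnr_db(8))   # close to 6 dB per extra bit
```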

Discrete-to-Continuous Signal Reconstruction

Ideal Reconstruction Process

Reconstruction reverses the sampling process: it takes the discrete samples and produces a continuous-time signal. The goal is to recover the original analog signal as closely as possible.

The ideal reconstruction method works as follows:

  1. Treat each sample as a weighted impulse at its corresponding time instant.
  2. Pass this impulse train through an ideal low-pass filter with a cutoff frequency of $f_s/2$ (the Nyquist frequency) and a gain of $T_s$.
  3. The filter's output is the reconstructed continuous-time signal.

The ideal low-pass filter has a perfectly rectangular frequency response: it passes all frequencies below $f_s/2$ with no distortion and completely blocks everything above. Its impulse response is the sinc function:

$$h(t) = \text{sinc}\left(\frac{t}{T_s}\right) = \frac{\sin(\pi t / T_s)}{\pi t / T_s}$$

The reconstructed signal is the convolution of the sampled signal with this sinc function:

$$x_r(t) = \sum_{n=-\infty}^{\infty} x[n] \cdot \text{sinc}\left(\frac{t - nT_s}{T_s}\right)$$

If the Nyquist condition was met during sampling, $x_r(t) = x(t)$ exactly. Each sample contributes a shifted, scaled sinc, and they all sum to perfectly fill in the signal between samples.
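The interpolation sum can be evaluated directly, truncated to a finite window since the ideal sum is infinite. A sketch, assuming a 13 Hz tone sampled at 100 Hz (note that NumPy's `np.sinc(u)` is exactly the $\sin(\pi u)/(\pi u)$ form used above):

```python
import numpy as np

fs = 100.0
Ts = 1 / fs
n = np.arange(-500, 500)                   # finite slice of the infinite sum
x_n = np.cos(2 * np.pi * 13 * n * Ts)      # samples of a 13 Hz tone (< fs/2)

# Evaluate x_r(t) at times between the sample instants
t = np.array([0.0105, 0.0333, 0.0771])
x_r = np.array([np.sum(x_n * np.sinc((ti - n * Ts) / Ts)) for ti in t])

err = np.max(np.abs(x_r - np.cos(2 * np.pi * 13 * t)))
print(err)   # small residual from truncating the infinite sum
```

The reconstruction matches the original tone between samples up to a small truncation error; widening the window shrinks it further.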

Practical Reconstruction Filters

The ideal low-pass filter can't be built in hardware for two reasons: its impulse response is infinite in duration and non-causal (it extends into negative time). Real reconstruction filters approximate the ideal with some compromises:

  • A transition band between the passband and stopband, rather than a sharp cutoff
  • Passband ripple, meaning small amplitude variations in the frequencies that should pass through unchanged
  • Finite stopband attenuation, meaning frequencies above the cutoff are reduced but not completely eliminated

Common practical approaches include zero-order hold (staircase output from a DAC, followed by a smoothing filter) and higher-order interpolation filters. The closer the filter approximates the ideal rectangular response, the more accurate the reconstruction.
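A zero-order hold is simple to emulate: each sample value is repeated across one full sampling period on a finer output grid. A minimal sketch, where the 8x fine grid stands in for the DAC's continuous-time output:

```python
import numpy as np

fs = 8000          # DAC sample rate (assumed for illustration)
L = 8              # fine-grid factor emulating the "continuous" output
N = 40             # number of samples in this snippet

x_samples = np.sin(2 * np.pi * 440 * np.arange(N) / fs)

# Zero-order hold: hold each value for one full period Ts -> staircase output
x_zoh = np.repeat(x_samples, L)
t_fine = np.arange(N * L) / (L * fs)       # time axis of the fine grid
```

The smoothing filter that follows the DAC rounds off these stair steps; the hold itself also imposes a sinc-shaped droop across the passband that careful designs compensate for.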


Reconstruction Accuracy Factors

Sampling Rate and Aliasing

The sampling rate is the single most important factor in reconstruction quality. If the Nyquist condition is violated, aliased components corrupt the spectrum permanently. No reconstruction filter can separate the aliased frequencies from the legitimate ones, because they occupy the same spectral locations.

For example, if you sample a 6 kHz tone at only 8 kHz, the tone exceeds the Nyquist frequency $f_s/2 = 4$ kHz (the signal's own Nyquist rate would be 12 kHz), so the 6 kHz component aliases to 2 kHz in the sampled data. The reconstructed signal will contain a 2 kHz tone that was never in the original.
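This can be checked sample by sample: at an 8 kHz rate, a 6 kHz tone and a 2 kHz tone produce identical sample sequences, so nothing downstream can tell them apart.

```python
import numpy as np

fs = 8000
n = np.arange(32)                          # sample indices

x_6k = np.cos(2 * np.pi * 6000 * n / fs)   # undersampled 6 kHz tone
x_2k = np.cos(2 * np.pi * 2000 * n / fs)   # its 2 kHz alias

print(np.allclose(x_6k, x_2k))             # True: indistinguishable after sampling
```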

Quantization Noise and Filter Characteristics

Even when the sampling rate is sufficient, two other factors limit reconstruction accuracy:

  • Quantization noise adds a low-level error signal across the spectrum. With fewer bits, this noise becomes more noticeable. For audio, 16-bit quantization yields a signal-to-quantization-noise ratio of about 96 dB, which is sufficient for most applications. Reducing to 8 bits drops that to roughly 48 dB.
  • Reconstruction filter imperfections introduce their own distortions. Passband ripple causes amplitude variations in the reconstructed signal's frequency content. A wide transition band means some out-of-band energy leaks through, and some near-band energy gets attenuated.

Longer filter impulse responses (higher-order filters) generally provide sharper cutoffs and better stopband rejection, but they add latency and computational cost.

Noise and Interference

Any noise or interference present in the discrete-time signal gets processed by the reconstruction filter along with the desired signal. The filter may amplify noise at certain frequencies or fail to attenuate interference that falls within the passband.

Techniques for mitigating these effects include:

  • Oversampling combined with digital filtering, which spreads quantization noise over a wider bandwidth and then filters most of it out
  • Adaptive filtering, which can track and cancel time-varying interference
  • Wavelet denoising, which exploits the time-frequency structure of the signal to separate it from noise
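The first technique can be illustrated crudely: quantize on an oversampled grid, then low-pass and decimate. In this sketch a plain block average stands in for the digital filter (a simplification, not a production design):

```python
import numpy as np

b, osr = 6, 16                 # bits and oversampling ratio (illustrative)
delta = 2.0 / 2**b

n = np.arange(65_536)
x = np.sin(2 * np.pi * n / 512)            # slow tone, well inside the final band

xq = np.round(x / delta) * delta           # quantize at the oversampled rate
x_dec = xq.reshape(-1, osr).mean(axis=1)   # crude low-pass filter + decimate
x_ref = x.reshape(-1, osr).mean(axis=1)    # same processing without quantization

noise_os = np.mean((x_dec - x_ref) ** 2)
print(noise_os < delta**2 / 12)            # True: less noise than direct quantization
```

Averaging spreads the quantization noise over the wider oversampled band and then discards most of it, leaving less in-band noise than quantizing directly at the low rate.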

The best reconstruction results come from meeting the Nyquist criterion with margin, using sufficient quantization depth, and choosing a reconstruction filter matched to the application's accuracy and latency requirements.