Fiveable

🌍Geophysics Unit 9 Review


9.1 Digital signal processing techniques


Written by the Fiveable Content Team • Last updated August 2025

Digital Signal Processing for Geophysical Data

Digital signal processing (DSP) is how geophysicists turn raw field recordings into interpretable data. Geophysical measurements are always contaminated by noise, whether from instruments, environmental sources, or the Earth itself. DSP provides the mathematical tools to separate signal from noise, resolve subsurface features, and extract quantitative information from seismic, gravity, magnetic, and electromagnetic datasets.

This section covers the core DSP techniques you'll need: sampling theory, digital filtering, and windowing/tapering methods.

Overview of DSP in Geophysics

Most geophysical instruments record analog signals that are then digitized for computer processing. Once in digital form, you can apply a range of techniques to clean up and analyze the data.

The most common DSP techniques in geophysics include:

  • Filtering to remove noise or isolate frequency bands of interest
  • Convolution and correlation to compare signals or model system responses
  • Fourier analysis to decompose signals into their frequency components
  • Wavelet analysis to examine how frequency content changes over time

Which technique you choose depends on the data type, the noise characteristics, and what you're trying to extract. A seismic reflection survey calls for different processing than a magnetotelluric time series, for example. In practice, most processing workflows are implemented in MATLAB or Python using libraries like ObsPy or SciPy.

Sampling, Aliasing, and Nyquist Frequency

The Sampling Process

Sampling converts a continuous analog signal into a discrete digital signal by measuring the signal's amplitude at regular time intervals. The sampling rate (or sampling frequency), measured in hertz (Hz), is the number of samples taken per second.

The critical concept here is the Nyquist frequency, which equals half the sampling rate:

f_{Nyquist} = \frac{f_s}{2}

where f_s is the sampling rate. The Nyquist frequency is the highest frequency that can be accurately represented in the digital signal.

The Nyquist-Shannon sampling theorem states that to faithfully capture a signal, you must sample at a rate at least twice the highest frequency component present in that signal:

f_s \geq 2 f_{max}

For example, if a seismic signal contains frequencies up to 125 Hz, you need a sampling rate of at least 250 Hz (a 4 ms sample interval).

Oversampling (sampling faster than required) improves signal quality and gives you a safety margin, but it increases data volume and processing time. In practice, geophysicists typically sample at 3 to 4 times the maximum frequency of interest.
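The sampling arithmetic above can be sketched in a few lines of Python (the values are the text's 125 Hz example):

```python
# Sampling-rate bookkeeping for a signal with content up to 125 Hz.
f_max = 125.0            # highest frequency in the signal (Hz)
fs_min = 2 * f_max       # Nyquist-Shannon minimum sampling rate: 250 Hz
dt_ms = 1000.0 / fs_min  # corresponding sample interval: 4 ms

fs = 4 * f_max           # practical choice: 3-4x oversampling margin
f_nyquist = fs / 2       # Nyquist frequency of the chosen rate: 250 Hz

print(fs_min, dt_ms, fs, f_nyquist)  # 250.0 4.0 500.0 250.0
```

Note that oversampling at 4x pushes the Nyquist frequency well above the signal band, which relaxes the demands on the anti-aliasing filter discussed below.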


Aliasing and Its Effects

Aliasing occurs when the sampling rate is too low to capture the highest frequency components in the signal. The result is that high-frequency energy gets "folded back" into lower frequencies, creating false spectral content that's indistinguishable from real data.

This is a serious problem because aliased signals can't be removed after digitization. Once the data is sampled, the damage is done.

To prevent aliasing:

  1. Determine the highest frequency present in your analog signal
  2. Set the sampling rate to at least twice that frequency (and preferably higher)
  3. Apply an anti-aliasing filter (an analog low-pass filter) before digitization to remove any frequency content above the Nyquist frequency

The anti-aliasing filter is applied in the analog domain, before the signal reaches the analog-to-digital converter. This is the only reliable way to prevent aliasing, since you can't fix it in post-processing.
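The folding effect is easy to demonstrate numerically. In this sketch (the 90 Hz tone and 100 Hz rate are illustrative assumptions), a signal above the Nyquist frequency produces samples identical, up to sign, to those of a much lower frequency:

```python
import numpy as np

fs = 100.0                  # sampling rate (Hz) -- Nyquist is only 50 Hz
f_true = 90.0               # true signal frequency, above Nyquist
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * f_true * t)

# The 90 Hz tone folds back to |90 - 100| = 10 Hz
f_alias = abs(f_true - round(f_true / fs) * fs)
x_alias = np.sin(2 * np.pi * f_alias * t)

# The sampled sequences match (with a sign flip from the folding),
# so nothing in the digital data distinguishes 90 Hz from 10 Hz.
print(f_alias, np.allclose(x, -x_alias))
```

This is why the anti-aliasing filter must act before digitization: once sampled, the 10 Hz alias is indistinguishable from genuine 10 Hz signal.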

Digital Filters for Signal Enhancement

Types of Digital Filters

Digital filters selectively modify the frequency content of a signal. There are four basic filter types, each defined by which frequencies they pass or reject:

  • Low-pass filter: Passes frequencies below a cutoff and attenuates higher frequencies. Use this to remove high-frequency noise (e.g., instrument chatter in a gravity survey).
  • High-pass filter: Passes frequencies above a cutoff and attenuates lower frequencies. Useful for removing slow drift or regional trends from data.
  • Band-pass filter: Passes only a specified frequency range and attenuates everything outside it. Common in seismic processing where you want to isolate the reflection signal band (say, 10–80 Hz).
  • Notch filter: Rejects a very narrow frequency band. The classic application is removing power line interference at 50 Hz or 60 Hz.
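As a sketch of the notch case, SciPy's `iirnotch` can remove a 60 Hz power-line tone while leaving nearby signal intact (the sampling rate, Q factor, and test frequencies here are illustrative assumptions):

```python
import numpy as np
from scipy import signal

fs = 500.0  # sampling rate (Hz), assumed

# Narrow notch centered at 60 Hz; Q controls how narrow the rejected band is
b, a = signal.iirnotch(w0=60.0, Q=30.0, fs=fs)

t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 30 * t) + np.sin(2 * np.pi * 60 * t)  # signal + mains hum

# filtfilt runs the filter forward and backward, cancelling phase distortion
y = signal.filtfilt(b, a, x)

# After filtering, the spectrum is dominated by the 30 Hz signal component
freqs = np.fft.rfftfreq(len(y), 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(y)))]
print(peak)
```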

When designing any filter, you need to specify:

  1. The filter type (low-pass, high-pass, band-pass, or notch)
  2. The cutoff frequency (or frequencies, for band-pass and notch filters)
  3. The filter order, which controls how sharply the filter transitions between passband and stopband

Higher-order filters give sharper cutoffs but can introduce other issues like ringing or phase distortion.
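The three design choices above map directly onto a SciPy filter-design call. This sketch builds the band-pass case from the seismic example (the sampling rate and synthetic trace are assumptions for illustration):

```python
import numpy as np
from scipy import signal

fs = 500.0  # sampling rate (Hz), assumed

# Design choices: type = band-pass, cutoffs = 10-80 Hz, order = 4 (Butterworth)
sos = signal.butter(4, [10, 80], btype='bandpass', fs=fs, output='sos')

# Synthetic trace: 40 Hz "reflection" + 2 Hz drift + 150 Hz noise
t = np.arange(0, 2, 1 / fs)
x = (np.sin(2 * np.pi * 40 * t)
     + np.sin(2 * np.pi * 2 * t)
     + 0.5 * np.sin(2 * np.pi * 150 * t))

# sosfiltfilt applies the filter forward and backward: zero phase distortion
y = signal.sosfiltfilt(sos, x)

# Only the in-band 40 Hz component survives
freqs = np.fft.rfftfreq(len(y), 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(y)))]
print(peak)
```

Second-order sections (`output='sos'`) are the numerically safe way to apply higher-order IIR filters; cascading sections avoids the coefficient-precision problems that make high-order transfer functions unstable.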


FIR vs. IIR Filters

Digital filters fall into two broad categories based on their mathematical structure:

Finite Impulse Response (FIR) filters:

  • Their output depends only on current and past input values
  • Always stable, which makes them reliable
  • Can be designed with linear phase, meaning they don't distort the shape of waveforms (just delay them uniformly). This is important in seismic processing where waveform shape carries information.
  • The tradeoff: achieving a sharp cutoff requires many filter coefficients, which increases computation

Infinite Impulse Response (IIR) filters:

  • Their output depends on both input values and previous output values (feedback)
  • Achieve sharp frequency cutoffs with far fewer coefficients than FIR filters
  • Can be unstable if poorly designed, and they introduce nonlinear phase distortion, which alters waveform shapes
  • Common IIR designs include Butterworth (maximally flat passband), Chebyshev (sharper cutoff with passband ripple), and Bessel (best phase response)

Choose FIR filters when waveform fidelity and phase preservation matter (e.g., seismic reflection data). Choose IIR filters when computational efficiency is the priority and some phase distortion is acceptable.

You can evaluate filter performance by examining the frequency response (what frequencies pass or get attenuated), the impulse response (how the filter reacts to a spike input), and the phase response (how much phase shift the filter introduces at each frequency).
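The FIR-versus-IIR tradeoff shows up directly in the coefficient counts. In this sketch (the tap count, cutoff, and sampling rate are illustrative assumptions), an FIR design needs roughly an order of magnitude more coefficients than a comparable Butterworth IIR filter, but its symmetric taps guarantee linear phase:

```python
import numpy as np
from scipy import signal

fs = 500.0  # sampling rate (Hz), assumed

# FIR low-pass: 101 taps, linear phase by construction
fir_taps = signal.firwin(101, cutoff=80, fs=fs)

# IIR Butterworth low-pass: similar cutoff with only 10 coefficients total
b, a = signal.butter(4, 80, fs=fs)

print(len(fir_taps), len(b) + len(a))        # 101 vs 10
# Linear phase follows from the taps being symmetric about their midpoint
print(np.allclose(fir_taps, fir_taps[::-1]))  # True
```

Passing either design to `scipy.signal.freqz` returns the frequency and phase responses mentioned above, which is the standard way to verify a filter before applying it to data.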

Windowing and Tapering Effects on Data

Windowing Techniques

In practice, you never process an infinitely long signal. You always select a finite segment of data for analysis. Windowing is the process of multiplying your data by a window function that defines which portion of the signal you're analyzing.

The simplest window is the rectangular window, which just chops out a segment with sharp edges. The problem is that these abrupt edges create artificial discontinuities, which produce spectral leakage: spurious frequency content that spreads energy from true spectral peaks into neighboring frequencies.

To reduce spectral leakage, you can use tapered window functions that smoothly roll off toward zero at the edges:

  • Hamming window: Good reduction of the nearest sidelobe; widely used as a general-purpose window
  • Hanning (Hann) window: Similar to Hamming but with slightly different sidelobe behavior; goes exactly to zero at the edges
  • Blackman window: Stronger sidelobe suppression than Hamming or Hanning, but at the cost of a wider main lobe (reduced frequency resolution)

There's a fundamental tradeoff here: windows that suppress spectral leakage more aggressively also broaden spectral peaks, reducing your ability to distinguish closely spaced frequencies. You're always balancing spectral resolution against spectral leakage.
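The leakage difference between a rectangular and a tapered window can be measured directly. This sketch (the tone frequency and 5 Hz "far from peak" threshold are illustrative assumptions) compares how much spectral energy each window scatters away from a tone that falls between FFT bins:

```python
import numpy as np
from scipy.signal import get_window

fs = 1000.0  # sampling rate (Hz), assumed
t = np.arange(0, 1, 1 / fs)
# A 10.5 Hz tone sits between FFT bins, the worst case for leakage
x = np.sin(2 * np.pi * 10.5 * t)

def far_leakage(x, window_name, fs):
    """Fraction of spectral energy more than 5 Hz away from the peak."""
    w = get_window(window_name, len(x))
    spec = np.abs(np.fft.rfft(x * w)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    far = np.abs(freqs - freqs[np.argmax(spec)]) > 5
    return spec[far].sum() / spec.sum()

rect = far_leakage(x, 'boxcar', fs)  # rectangular window
hann = far_leakage(x, 'hann', fs)    # tapered window
print(rect > hann)  # True: the rectangular window leaks far more
```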

You can examine how windowing affects your data using the short-time Fourier transform (STFT), which applies the Fourier transform to successive windowed segments, or the continuous wavelet transform (CWT), which provides variable time-frequency resolution.

Tapering and Overlapping Windows

Tapering is the gradual reduction of signal amplitude at the edges of a data window. Common taper shapes include cosine tapers (also called Tukey windows) and Gaussian tapers. Tapering serves the same purpose as using a non-rectangular window function: it minimizes edge discontinuities and the resulting spectral leakage.

When computing spectral estimates from long time series, it's common to divide the data into overlapping segments. The Welch method is a standard approach:

  1. Divide the time series into segments of a chosen length
  2. Apply a window function to each segment
  3. Compute the power spectrum of each windowed segment
  4. Average the spectra across all segments

Using overlapping windows (typically 50% overlap) means that data near the tapered edges of one window falls near the center of the adjacent window, so no part of the signal is underweighted. This reduces the variance of the spectral estimate, giving you a smoother, more reliable spectrum.

The tradeoff with overlap percentage and window length is straightforward: longer windows give better frequency resolution, shorter windows give better time localization, and more overlap gives smoother estimates at the cost of increased computation.
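The Welch steps above are implemented in `scipy.signal.welch`. This sketch (the synthetic series, segment length, and overlap are illustrative assumptions) recovers a 25 Hz tone buried in noise by averaging Hann-windowed, 50%-overlapped segments:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 500.0                     # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)   # one minute of synthetic data
x = np.sin(2 * np.pi * 25 * t) + rng.standard_normal(t.size)

# Welch method: 2 s Hann-windowed segments, 50% overlap, spectra averaged
f, pxx = signal.welch(x, fs=fs, window='hann',
                      nperseg=int(2 * fs), noverlap=int(fs))

print(f[np.argmax(pxx)])  # spectral peak at the 25 Hz tone
```

With 2 s segments the frequency resolution is 0.5 Hz; doubling `nperseg` would sharpen the resolution but halve the number of averages, illustrating the tradeoff described above.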