🌍Geophysics Unit 2 Review

2.2 Seismic instrumentation and data acquisition

Written by the Fiveable Content Team • Last updated August 2025

Seismometer Components and Functioning

Seismometers are the foundation of observational seismology. They translate ground motion into electrical signals that can be recorded, stored, and analyzed. Understanding how they work, and where they fall short, is essential for interpreting any seismic dataset.

Seismometer Design and Components

A seismometer has three core components:

  1. Sensor — detects ground motion. This is either a mass-spring system (a mass suspended on springs that stays relatively still while the frame moves with the ground) or a force-balance system (uses a feedback loop to actively hold the mass stationary relative to the ground, then measures the force required to do so).
  2. Damping mechanism — suppresses unwanted oscillations near the instrument's resonant frequency. Common approaches include oil damping and electromagnetic damping. Without adequate damping, the sensor would "ring" after an impulse and distort the recorded waveform.
  3. Transducer — converts the mechanical motion (or the feedback signal in force-balance designs) into an electrical signal for recording.

Force-balance sensors are standard in modern broadband instruments because the feedback design gives them a much flatter and wider frequency response than a purely passive mass-spring system.
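The effect of damping on a mass-spring sensor can be seen in a minimal NumPy sketch of its impulse response (the natural frequency and damping ratios below are illustrative choices, not values from any particular instrument):

```python
import numpy as np

def impulse_response(f0, zeta, t):
    """Impulse response of an underdamped mass-spring sensor with natural
    frequency f0 (Hz) and damping ratio zeta (0 < zeta < 1)."""
    w0 = 2 * np.pi * f0
    wd = w0 * np.sqrt(1 - zeta**2)              # damped angular frequency
    return np.exp(-zeta * w0 * t) * np.sin(wd * t) / wd

t = np.linspace(0, 5, 2000)
ringy = impulse_response(1.0, 0.05, t)    # far too little damping: "rings"
damped = impulse_response(1.0, 0.707, t)  # a typical design damping ratio

# Last time the response still exceeds 5% of its peak amplitude:
settle = lambda x: t[np.abs(x) > 0.05 * np.abs(x).max()].max()
```

With the well-damped sensor the impulse dies out in under a second; the underdamped one is still ringing at the end of the five-second window, which is exactly the waveform distortion the damping mechanism exists to prevent.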

Broadband Seismometers and Their Advantages

Traditional short-period seismometers are tuned to a narrow frequency band (typically around 1 Hz). Broadband seismometers overcome this limitation by recording across a wide frequency range, from long-period surface waves (periods of hundreds of seconds) down to short-period P and S waves (frequencies of tens of Hz).

Why this matters:

  • They can detect smaller ground motions thanks to higher sensitivity across the spectrum.
  • A single instrument can capture both local microseismic events and large teleseismic earthquakes (events thousands of kilometers away).
  • Waveform analysis in both the time domain and frequency domain becomes possible from one record, giving you information about Earth structure and source properties simultaneously.

Seismic Arrays and Their Applications

A seismic array is a network of seismometers deployed in a deliberate geometric pattern. The primary goals are to improve the signal-to-noise ratio (SNR) beyond what any single station can achieve and to estimate the direction and apparent velocity of incoming wavefronts.

Common array geometries:

  • Linear arrays — best for studying wave propagation along a specific azimuth.
  • Circular arrays — provide omnidirectional sensitivity.
  • Grid (2D) arrays — offer high spatial resolution for imaging 3D subsurface structure.

Key array processing techniques:

  • Beamforming — signals from all stations are summed with calculated time delays so that energy arriving from a target direction adds constructively, while noise from other directions cancels out. Think of it as "steering" the array's sensitivity toward a particular source.
  • Frequency-wavenumber (f-k) analysis — transforms the data into the frequency-wavenumber domain, where different seismic phases separate by their apparent velocity and propagation direction. This is especially useful for distinguishing overlapping arrivals that look similar in the time domain.
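Delay-and-sum beamforming can be sketched in a few lines of NumPy (the array geometry, wave speed, and noise level below are made-up illustration values):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100.0, 1000                  # sampling rate (Hz) and trace length
t = np.arange(n) / fs
x = np.arange(8) * 1000.0            # 8 stations on a line, 1 km spacing
slowness = 1 / 6000.0                # s/m: plane wave at 6 km/s along the array

# Each station records the same 2 Hz wave, delayed by slowness * distance,
# plus independent noise.
signal = np.sin(2 * np.pi * 2.0 * (t[None, :] - slowness * x[:, None]))
traces = signal + 0.5 * rng.standard_normal((8, n))

# Delay-and-sum beam "steered" at the true slowness: shift each trace back
# by its propagation delay (linear interpolation), then stack.
beam = np.mean(
    [np.interp(t, t - slowness * xi, tr) for xi, tr in zip(x, traces)],
    axis=0,
)
clean = np.sin(2 * np.pi * 2.0 * t)  # what the wave looks like at x = 0
```

Stacking the eight aligned traces shrinks the incoherent noise by roughly the square root of the station count, so the beam tracks the clean waveform much more closely than any single noisy trace does.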

Digital Seismic Data Acquisition

Nearly all modern seismic recording is digital. The analog electrical signal from the seismometer is sampled, quantized, and stored as discrete numerical values. The quality of this digitization process directly controls what you can and cannot resolve in the data.

Analog-to-Digital Conversion and Sampling

An analog-to-digital converter (ADC) samples the continuous sensor output at regular time intervals.

The Nyquist-Shannon sampling theorem sets the fundamental rule: the sampling rate must be at least twice the highest frequency you want to record. If you violate this, you get aliasing, where high-frequency energy folds back and masquerades as lower-frequency signal. There's no way to fix aliased data after the fact.

For example, if your highest frequency of interest is 50 Hz, you need a sampling rate of at least 100 samples per second (100 Hz). In practice, oversampling (sampling well above the Nyquist rate) is common because it improves SNR and reduces the effects of quantization noise.
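The folding effect is easy to demonstrate numerically (a NumPy sketch; the 60 Hz tone is an arbitrary choice above the 50 Hz Nyquist frequency):

```python
import numpy as np

fs = 100.0                                  # sampling rate (Hz); Nyquist = 50 Hz
t = np.arange(0, 1, 1 / fs)
tone_60hz = np.sin(2 * np.pi * 60.0 * t)    # above Nyquist: will alias

# Sampled at 100 Hz, the 60 Hz tone is indistinguishable from a tone at
# 100 - 60 = 40 Hz, so all its energy shows up in the 40 Hz bin.
spectrum = np.abs(np.fft.rfft(tone_60hz))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)   # → 40.0
```

Nothing in the recorded samples reveals that the original tone was 60 Hz, which is why aliasing must be prevented (with anti-alias filtering before the ADC) rather than repaired afterward.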

Dynamic Range and Seismic Data Formats

Dynamic range is the ratio between the largest and smallest amplitudes the system can faithfully record, expressed in decibels (dB) or bits.

  • Modern systems typically use 24-bit ADCs, which provide roughly 144 dB of dynamic range. This is enough to record both the faint background hum of the Earth and the strong near-field signal of a moderate earthquake without clipping or losing small-amplitude detail.
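The bits-to-decibels conversion behind that figure is direct: an ideal n-bit converter spans a range of 2^n quantization levels, giving 20·log10(2^n) ≈ 6.02 dB per bit:

```python
import math

def dynamic_range_db(bits):
    """Dynamic range of an ideal n-bit ADC: 20 * log10(2**n) ~ 6.02 dB/bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(24), 1))  # → 144.5
print(round(dynamic_range_db(16), 1))  # → 96.3
```

The jump from 16-bit to 24-bit digitizers is thus worth about 48 dB, more than a factor of 250 in amplitude.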

Standard data formats:

  • SEED (Standard for the Exchange of Earthquake Data) and its lighter variant miniSEED are the community standards. They bundle waveform data with metadata (station coordinates, instrument response, timing).
  • Steim compression is a lossless algorithm commonly applied to miniSEED data. It exploits the sample-to-sample similarity in seismic traces to achieve high compression ratios without discarding any information.
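The principle can be illustrated with a toy first-difference scheme (this is not the actual Steim codec, which packs the differences into variable-width fields, but it shows why differencing smooth traces pays off):

```python
import numpy as np

# Consecutive seismic samples are similar, so first differences are small
# integers that need far fewer bits than the raw 32-bit values.
samples = (1e5 * np.sin(np.linspace(0, 4 * np.pi, 500))).astype(np.int32)

diffs = np.diff(samples, prepend=0)    # encode: store differences only
restored = np.cumsum(diffs)            # decode: running sum is lossless

print(int(np.abs(samples).max()), int(np.abs(diffs).max()))
```

The raw samples here span roughly five decimal digits while the differences span four fewer bits' worth of range, and the cumulative sum reconstructs the original trace exactly, sample for sample.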

These standardized formats are what make it possible for researchers worldwide to share and archive data through repositories like IRIS.

[Figure: mass-spring-damper schematic — File:Mass spring damper.svg, Wikimedia Commons]

Seismic Data Processing Techniques

Raw seismic records contain the signal you want plus instrument effects, Earth attenuation, and noise. Processing strips away the unwanted parts and enhances the useful information.

Filtering and Deconvolution

Filtering selectively passes or rejects certain frequency bands:

Filter Type           What It Does
Low-pass              Removes high-frequency noise
High-pass             Removes low-frequency noise
Band-pass             Isolates a specific frequency range
Band-reject (notch)   Removes a specific frequency range

Filters can be applied in the time domain (via convolution) or in the frequency domain (via Fourier transforms). The choice often depends on computational convenience and the filter design.
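As a concrete sketch, here is a zero-phase Butterworth band-pass filter in SciPy (the corner frequencies and the synthetic trace are illustrative choices):

```python
import numpy as np
from scipy import signal

fs = 100.0                                  # samples per second
t = np.arange(0, 10, 1 / fs)
# Synthetic trace: a 5 Hz "signal" buried in a strong 0.1 Hz microseism
# and 30 Hz high-frequency noise.
trace = (np.sin(2 * np.pi * 5 * t)
         + 3 * np.sin(2 * np.pi * 0.1 * t)
         + np.sin(2 * np.pi * 30 * t))

# 4th-order Butterworth band-pass, 1-10 Hz, run forward and backward
# (sosfiltfilt) so the filter introduces no phase shift.
sos = signal.butter(4, [1, 10], btype="bandpass", fs=fs, output="sos")
filtered = signal.sosfiltfilt(sos, trace)
```

After filtering, the spectrum of `filtered` is dominated by the 5 Hz component; both out-of-band tones are attenuated by tens of decibels.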

Deconvolution goes a step further. Every seismometer has an instrument response, a transfer function describing how it converts true ground motion into voltage. Deconvolution removes this response (and optionally corrects for Earth attenuation) to recover the actual ground displacement, velocity, or acceleration. It can also compress the seismic wavelet, improving temporal resolution so you can distinguish closely spaced arrivals or reflectors.

The instrument response is determined through calibration experiments and is stored in the metadata alongside the waveform data.
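One common way to stabilize the spectral division that deconvolution requires is a water level, sketched below (the function name and `level` parameter are illustrative, not a standard API):

```python
import numpy as np

def waterlevel_deconvolve(recorded, response, level=0.01):
    """Remove an instrument response by spectral division, with a "water
    level" floor so near-zero response values don't amplify noise."""
    R = np.fft.rfft(recorded)
    H = np.fft.rfft(response, n=len(recorded))
    floor = level * np.abs(H).max()
    # Where |H| dips below the floor, replace its magnitude (keep its phase)
    # so the division R / H stays bounded.
    H_stab = np.where(np.abs(H) < floor, floor * np.exp(1j * np.angle(H)), H)
    return np.fft.irfft(R / H_stab, n=len(recorded))
```

With a well-behaved response the floor never triggers and the division recovers the input exactly; with a response that has spectral notches, raising `level` trades resolution for stability.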

Advanced Signal Analysis Techniques

Spectral analysis using the Fourier transform decomposes a time-domain signal into its constituent frequencies. This reveals the power spectrum (which frequencies carry the most energy) and phase information.

Polarization analysis applies to three-component recordings (vertical + two horizontal). It determines the direction of particle motion, which lets you separate wave types:

  • P waves — linear particle motion parallel to the propagation direction.
  • SV waves — particle motion perpendicular to propagation, in the vertical plane.
  • SH waves — particle motion perpendicular to propagation, in the horizontal plane.
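A standard way to quantify how linear (P-wave-like) the particle motion is uses the eigenvalues of the three-component covariance matrix; a minimal NumPy sketch:

```python
import numpy as np

def rectilinearity(z, n, e):
    """Degree of linear particle motion from the eigenvalues of the
    three-component covariance matrix: 1 = perfectly linear (P-wave-like),
    lower values indicate elliptical or diffuse motion."""
    cov = np.cov(np.vstack([z, n, e]))
    lams = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
    return 1 - (lams[1] + lams[2]) / (2 * lams[0])

t = np.linspace(0, 1, 500)
s = np.sin(2 * np.pi * 5 * t)
# All three components in phase: motion along one line, score near 1.
print(rectilinearity(0.8 * s, 0.5 * s, 0.3 * s))
```

Two components in quadrature (circular motion) drop the score to about 0.5, which is how this attribute separates P arrivals from elliptically polarized surface waves.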

Seismic attribute analysis extracts derived quantities from the waveform to highlight subsurface features:

  • Instantaneous amplitude — relates to signal energy; useful for spotting high-amplitude reflectors or lithology changes.
  • Instantaneous phase — tracks wavefront continuity and reveals phase shifts or discontinuities.
  • Instantaneous frequency — identifies changes in frequency content that may indicate variations in rock properties or the presence of fluids (including hydrocarbons in exploration contexts).
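All three instantaneous attributes come from the analytic signal, computed with the Hilbert transform; a short SciPy sketch (the decaying 20 Hz wavelet is an illustrative synthetic):

```python
import numpy as np
from scipy.signal import hilbert

fs = 500.0
t = np.arange(0, 2, 1 / fs)
# A 20 Hz oscillation whose amplitude decays with time.
trace = np.exp(-t) * np.sin(2 * np.pi * 20 * t)

analytic = hilbert(trace)                      # trace + i * Hilbert(trace)
inst_amplitude = np.abs(analytic)              # envelope of the signal
inst_phase = np.unwrap(np.angle(analytic))     # continuous phase, radians
inst_frequency = np.diff(inst_phase) * fs / (2 * np.pi)   # Hz, per sample
```

Away from the trace edges, the envelope tracks the exp(-t) decay and the instantaneous frequency sits at 20 Hz, recovering both attributes sample by sample rather than as window averages.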

Seismic Instrumentation Limitations

Even the best instruments introduce artifacts. Recognizing and mitigating these limitations is just as important as understanding the instruments themselves.

Noise Sources and Their Mitigation

Instrument self-noise comes from electronic components and thermal fluctuations inside the seismometer. It sets a floor below which real ground motion cannot be detected. Low-noise instruments minimize this through high-quality electronics, electromagnetic shielding, and temperature-compensated circuitry.

Site noise comes from the environment:

  • Wind — pressure fluctuations tilt and vibrate the sensor. Mitigation: wind shields, or array-based coherent noise cancellation.
  • Ocean microseisms — generated by ocean wave interactions, these produce a persistent noise peak near 0.1–0.3 Hz. Mitigation: seafloor installations or pressure-sensor corrections for the water column.
  • Cultural noise — traffic, machinery, footsteps. Mitigation: install stations in underground vaults, boreholes, or remote locations away from human activity.

Proper site selection is often the single most effective way to reduce noise.

Timing Errors and Quantization Noise

Clock drift in the data acquisition system causes timing errors that propagate directly into event location estimates. GPS synchronization and regular clock corrections keep drift to microsecond levels.

Quantization noise arises because the ADC rounds continuous voltage values to the nearest discrete level. Higher-resolution ADCs (24-bit vs. 16-bit) reduce this noise substantially. An additional technique called dithering adds a tiny amount of random noise to the analog signal before digitization. This sounds counterintuitive, but it randomizes the quantization error and effectively improves the ADC's resolution for small signals.
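A small NumPy experiment makes the dithering effect concrete (the sub-LSB sine and uniform dither amplitude here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
lsb = 1.0                                        # one ADC quantization step
t = np.linspace(0, 1, 5000)
small = 0.4 * lsb * np.sin(2 * np.pi * 5 * t)    # signal smaller than one LSB

# Without dither every sample rounds to zero: the signal vanishes entirely.
plain = np.round(small / lsb) * lsb

# With uniform dither of +/- half an LSB added before rounding, each output
# sample is noisy, but its average equals the true sub-LSB input.
dither = rng.uniform(-lsb / 2, lsb / 2, t.size)
dithered = np.round((small + dither) / lsb) * lsb

# Recover the sine amplitude by correlating with the known waveform:
estimate = 2 * np.mean(dithered * np.sin(2 * np.pi * 5 * t))
```

The undithered output is identically zero, while the correlation estimate from the dithered stream lands close to the true 0.4 LSB amplitude: the added noise has traded a hard quantization floor for resolvable, averageable noise.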

Metadata Management and Quality Control

Even perfect waveform data becomes useless if the metadata is wrong. Incorrect instrument response parameters, station coordinates, or timing information will propagate errors through every downstream analysis step.

Best practices for metadata and quality control:

  • Review and update metadata whenever instrumentation or station configuration changes.
  • Use automated QC tools to flag gaps, spikes, abnormal amplitudes, or timing jumps in continuous data streams.
  • Supplement automated checks with visual inspection of waveforms and spectrograms to catch subtle artifacts that algorithms may miss.

Rigorous metadata management isn't glamorous, but it's what separates reliable seismic datasets from unreliable ones.