
🌍Geophysics Unit 11 Review


11.1 Seismic hazard assessment and earthquake prediction


Written by the Fiveable Content Team • Last updated August 2025

Seismic hazard assessment and earthquake prediction are central to understanding and reducing earthquake risks. These methods draw on statistical analysis, historical data, and geological evidence to estimate the likelihood and potential impact of future earthquakes at specific sites.

Short-term earthquake prediction remains one of the hardest problems in geophysics, but long-term forecasting is well-established and directly shapes building codes, land-use planning, and emergency preparedness. Seismic hazard maps translate all of this analysis into practical tools that guide decision-making in earthquake-prone regions.

Probabilistic Seismic Hazard Assessment

Key Components and Methods

Probabilistic Seismic Hazard Assessment (PSHA) is a statistical framework that quantifies the probability of exceeding a certain ground motion level at a specific site over a given time period. It's the backbone of modern seismic hazard analysis, and it combines several components into a single, integrated result.

PSHA follows a general workflow:

  1. Seismic source characterization — Identify and model potential earthquake sources, including their geometry (fault length, dip, depth), maximum magnitude ($M_{max}$), and recurrence rates.
  2. Ground motion prediction equations (GMPEs) — Apply attenuation relationships that describe how ground motion decreases with distance from the source. GMPEs account for magnitude, source-to-site distance, and local site conditions (soil type, bedrock depth, basin effects).
  3. Hazard curve calculation — For each site, compute the annual probability of exceeding various levels of ground motion (typically peak ground acceleration, PGA, or spectral acceleration, $S_a$).
  4. Hazard map generation — Display the spatial distribution of ground motion levels for a chosen probability of exceedance. Common reference levels are 2% probability of exceedance in 50 years (roughly a 2,475-year return period) and 10% in 50 years (475-year return period).

The distinction between hazard curves and hazard maps matters: a hazard curve shows the full range of exceedance probabilities for a single site, while a hazard map shows one exceedance probability across an entire region.
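The reference levels above follow directly from a time-independent (Poisson) occurrence model, where the probability of exceedance $P$ in $t$ years relates to the return period $T$ by $P = 1 - e^{-t/T}$. A minimal sketch of that conversion (the function name is illustrative, not from any standard library):

```python
import math

def return_period(prob_exceedance: float, t_years: float) -> float:
    """Return period T (years) for a given probability of exceedance in t_years,
    assuming Poisson occurrence: P = 1 - exp(-t/T)  =>  T = -t / ln(1 - P)."""
    return -t_years / math.log(1.0 - prob_exceedance)

# The two standard code-reference levels:
t_2pct = return_period(0.02, 50)   # ~2475 years (2% in 50 yr)
t_10pct = return_period(0.10, 50)  # ~475 years (10% in 50 yr)
```

This is why "2% in 50 years" and "2,475-year return period" are interchangeable in hazard map legends.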

Accounting for Uncertainties

A major strength of PSHA is its formal treatment of uncertainty. Two types matter here:

  • Epistemic uncertainty arises from incomplete knowledge. Examples include poorly constrained fault geometries, uncertain $M_{max}$ estimates, and limited strong-motion datasets used to calibrate GMPEs.
  • Aleatory variability reflects the natural randomness in earthquake processes, such as the scatter in recorded ground motions even for events of similar magnitude and distance.

Logic trees are the standard tool for handling epistemic uncertainty. Each branch of the tree represents an alternative model or parameter choice (e.g., different GMPEs, different fault slip rates), and each branch is assigned a weight reflecting its relative credibility. The final hazard result integrates across all branches, capturing the full range of possible outcomes rather than relying on a single "best guess."

By combining logic trees with the inherent aleatory variability in GMPEs, PSHA produces hazard estimates that are more robust and transparent about what we know and what we don't.
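The logic-tree combination itself is just a credibility-weighted average over branch results. A minimal sketch, with entirely hypothetical branch rates and weights (real trees have many more branches, covering GMPEs, slip rates, and $M_{max}$ choices):

```python
# Hypothetical logic tree: annual rates of exceeding some PGA level at one site,
# computed under three alternative GMPEs, each with a subjective weight.
branches = [
    (0.40, 0.0021),  # (weight, annual exceedance rate) for GMPE A
    (0.35, 0.0015),  # GMPE B
    (0.25, 0.0032),  # GMPE C
]

# Branch weights must sum to 1 so the result is a proper weighted mean.
assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9

mean_rate = sum(w * rate for w, rate in branches)  # weighted-mean hazard
```

In practice the weighted combination is done at every ground-motion level on the hazard curve, and the branch-to-branch spread is itself reported as a measure of epistemic uncertainty.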

Historical Seismicity Data for Hazard Assessment

[Figure: Probabilistic seismic hazard analysis using the logic tree approach – Patna district, India (NHESS)]

Sources and Applications of Historical Data

Reliable earthquake records are the foundation of any hazard assessment. These records come from two main sources:

  • Instrumental data, recorded by seismometers since the late 19th century (and with good global coverage since the mid-20th century), provide precise locations, magnitudes, depths, and focal mechanisms.
  • Historical accounts, including written records, newspaper reports, and personal diaries, extend the record further back in time. These are especially valuable in regions with long written histories (e.g., China, the Mediterranean, Japan) but require careful calibration to assign approximate magnitudes and locations.

Both types of data feed into earthquake catalogs, which compile events for a region and form the basis for estimating seismicity rates. A key relationship derived from these catalogs is the Gutenberg-Richter law:

$\log_{10} N = a - bM$

where $N$ is the number of earthquakes with magnitude $\geq M$, $a$ describes the overall rate of seismicity, and $b$ (typically close to 1.0) describes the relative proportion of large versus small events. This relationship is fundamental to estimating recurrence rates in PSHA.
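The Gutenberg-Richter relation can be evaluated directly once $a$ and $b$ are fit to a catalog. A short sketch with hypothetical regional parameters (the values of $a$ and $b$ here are illustrative, not from any real catalog):

```python
def gr_annual_count(a: float, b: float, m: float) -> float:
    """Annual number of earthquakes with magnitude >= m,
    from the Gutenberg-Richter law: log10(N) = a - b*M."""
    return 10.0 ** (a - b * m)

# Hypothetical region with a = 4.0, b = 1.0:
n_m5 = gr_annual_count(4.0, 1.0, 5.0)  # 0.1 per year -> one M>=5 every ~10 yr
n_m6 = gr_annual_count(4.0, 1.0, 6.0)  # 0.01 per year -> one M>=6 every ~100 yr
```

Note how a $b$-value of 1.0 means each unit increase in magnitude makes events ten times rarer, which is why large earthquakes are so poorly sampled by short catalogs.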

Paleoseismic Evidence

Instrumental and historical records cover only a few hundred years at best, which is far too short to capture the full behavior of faults with recurrence intervals of thousands of years. Paleoseismology extends the earthquake record into the geologic past using physical evidence preserved in the landscape and subsurface.

Key methods include:

  • Fault trenching — Excavating across an active fault to expose displaced and deformed sedimentary layers. Each disrupted horizon can represent a past earthquake, and dating techniques (radiocarbon, optically stimulated luminescence) constrain when each event occurred.
  • Liquefaction features — Sand blows and sand dikes form when strong shaking causes saturated, loose sediments to lose strength and erupt to the surface. Finding and dating these features in the stratigraphic record provides evidence of past strong shaking even where no surface fault rupture occurred.
  • Offset geomorphic features — Displaced stream channels, offset terrace risers, and warped shorelines record cumulative fault slip. Dating these offsets with radiocarbon or cosmogenic nuclide methods (e.g., $^{10}$Be, $^{26}$Al) yields slip rates and recurrence intervals.

Paleoseismic data are critical for constraining $M_{max}$ and recurrence intervals on specific faults, directly feeding into the seismic source characterization step of PSHA.
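Turning an offset geomorphic feature into a recurrence estimate is simple arithmetic once the offset and its age are dated. A sketch with hypothetical field numbers (the 25 m offset, 10,000-year age, and 5 m characteristic slip are made up for illustration):

```python
def slip_rate_mm_yr(offset_m: float, age_yr: float) -> float:
    """Average fault slip rate (mm/yr) from a dated offset feature."""
    return offset_m * 1000.0 / age_yr

def mean_recurrence_yr(slip_per_event_m: float, slip_rate: float) -> float:
    """Mean recurrence interval (yr), assuming a characteristic
    slip per event released at the long-term slip rate."""
    return slip_per_event_m * 1000.0 / slip_rate

# Hypothetical: a stream channel offset 25 m, dated at 10,000 years.
rate = slip_rate_mm_yr(25.0, 10_000.0)  # 2.5 mm/yr
ri = mean_recurrence_yr(5.0, rate)      # ~2000 yr between large events
```

These are exactly the slip rates and recurrence intervals that populate the seismic source models in PSHA.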

Limitations of Earthquake Prediction

[Figure: Revisiting seismic hazard assessment for Peninsular Malaysia using deterministic and ... (NHESS)]

Challenges in Short-term Prediction

There's an important distinction to keep straight: prediction means specifying the location, time window, and magnitude of a future earthquake with enough precision to be actionable. Forecasting means providing probabilities of future seismic events over a broader time window and area.

Short-term prediction remains largely unachieved, for several reasons:

  • The physics of earthquake nucleation (how a rupture initiates on a fault) is still not well understood. We don't have a reliable physical model that tells us when accumulated stress will be released.
  • Proposed precursory phenomena have not proven consistent or reliable. These include foreshock sequences, changes in seismicity patterns (seismic quiescence or activation), groundwater level fluctuations, radon gas emissions, and electromagnetic anomalies. Some of these have been observed before certain earthquakes but are absent before others, and they also occur without any subsequent earthquake.
  • The chaotic, nonlinear nature of fault systems means that small, unmeasurable differences in stress state can determine whether a fault ruptures now or decades from now.

The 1975 Haicheng earthquake in China is often cited as a successful prediction (based on foreshock activity), but the 1976 Tangshan earthquake, which killed over 240,000 people, came with no recognized precursors. This contrast illustrates the fundamental unreliability of current precursor-based approaches.

Long-term Forecasting and Consequences

Long-term forecasting is more tractable and forms the practical basis for hazard mitigation. Two common approaches:

  • Time-independent models use average recurrence rates (from Gutenberg-Richter statistics) and assume earthquakes are equally likely at any time. These are the simplest and most widely used in PSHA.
  • Time-dependent models (renewal models, stress transfer models) account for the time elapsed since the last major event on a fault. The idea is that probability increases as stress re-accumulates. Coulomb stress transfer models also consider how a nearby earthquake may have brought a fault closer to (or further from) failure.
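The practical difference between the two model classes can be sketched numerically. Below, a Poisson (time-independent) model gives the same 30-year probability no matter when the last event occurred, while a lognormal renewal model (one common choice for time-dependent forecasts) gives a higher conditional probability the longer the fault has been quiet. All parameter values here are hypothetical:

```python
import math

def lognorm_cdf(t: float, mean_ri: float, cov: float) -> float:
    """CDF of a lognormal recurrence-interval distribution, parameterized by
    its mean recurrence interval and coefficient of variation (aperiodicity)."""
    sigma = math.sqrt(math.log(1.0 + cov ** 2))
    mu = math.log(mean_ri) - 0.5 * sigma ** 2
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def renewal_prob(elapsed: float, window: float, mean_ri: float, cov: float) -> float:
    """Time-dependent: probability of an event in the next `window` years,
    given `elapsed` years since the last one."""
    F = lambda t: lognorm_cdf(t, mean_ri, cov)
    return (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))

def poisson_prob(window: float, mean_ri: float) -> float:
    """Time-independent: probability of an event in the next `window` years."""
    return 1.0 - math.exp(-window / mean_ri)

# Hypothetical fault: mean recurrence 200 yr, COV 0.5, 30-year forecast window.
p_static = poisson_prob(30.0, 200.0)            # same regardless of elapsed time
p_early = renewal_prob(50.0, 30.0, 200.0, 0.5)  # soon after the last event
p_late = renewal_prob(150.0, 30.0, 200.0, 0.5)  # long after the last event
```

The renewal probability grows as elapsed time approaches the mean recurrence interval, which is the core intuition behind time-dependent forecasting.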

Even with these tools, forecasts carry substantial uncertainty. The societal stakes of getting it wrong are high:

  • False alarms can trigger unnecessary evacuations, economic disruption, and erosion of public trust in scientific institutions.
  • Missed events can lead to catastrophic losses if communities are unprepared.

Communicating probabilistic forecasts to the public and to policymakers is itself a major challenge. Probabilities like "a 62% chance of an $M \geq 6.7$ earthquake in the San Francisco Bay Area within 30 years" (from the USGS 2007 Working Group) are meaningful to scientists but can be difficult for non-specialists to interpret and act on.

Seismic Hazard Maps for Risk Mitigation

Applications in Building Codes and Land-use Planning

Seismic hazard maps are the primary way PSHA results reach engineers, planners, and policymakers. They display expected ground motion levels (usually PGA or spectral acceleration at specific periods) for a chosen probability of exceedance across a region.

Their applications include:

  • Building codes — Modern risk-targeted codes (e.g., ASCE 7 in the United States, Eurocode 8 in Europe) use hazard map values to set minimum design requirements for structures. A building in Los Angeles faces different design ground motions than one in Houston, and the hazard map quantifies that difference.
  • Seismic retrofitting — Hazard maps help prioritize which existing buildings and infrastructure need strengthening first, focusing resources on the highest-hazard areas.
  • Land-use planning — Zoning regulations can steer critical facilities (hospitals, schools, fire stations) and lifelines (bridges, water mains, power grids) away from the most hazardous zones, including areas prone to surface rupture, liquefaction, or landslides.

Risk Assessment and Emergency Response Planning

Beyond engineering design, hazard maps support broader risk management:

  • Insurance — Companies use seismic hazard data to assess portfolio risk and set premiums for earthquake insurance. Higher-hazard zones correspond to higher premiums.
  • Emergency response planning — Hazard maps inform the design of evacuation routes, placement of shelters and medical facilities, and pre-positioning of supplies for post-earthquake response.
  • Scenario modeling — Scenario-based maps simulate ground motions and impacts from specific plausible earthquakes (e.g., an M 7.8 on the San Andreas Fault). These are used for emergency drills, public education campaigns (like California's ShakeOut), and loss estimation studies.

Seismic hazard maps are not static products. They require regular updates as new data become available, GMPEs are refined, previously unknown faults are identified, and our understanding of regional tectonics evolves. For example, the U.S. National Seismic Hazard Model is updated roughly every six years to incorporate the latest science.