Fiveable

🗺️World Geography Unit 23 Review


23.2 Remote Sensing Technologies and Data Analysis


Written by the Fiveable Content Team • Last updated August 2025

Remote sensing technologies let us observe and analyze Earth without being physically present. From satellites orbiting hundreds of kilometers above us to drones flying over a farmer's field, these tools capture detailed images and data about Earth's surface, atmosphere, and oceans.

When you combine remote sensing with GIS, you get a powerful toolkit used across many fields. Scientists and planners rely on this pairing to map land use, manage resources, monitor disasters, and plan cities. Think of remote sensing as the "eyes" that gather data, and GIS as the "brain" that organizes and analyzes it.

Remote Sensing Principles and Techniques

Electromagnetic Radiation and Remote Sensing

All remote sensing depends on one core idea: objects on Earth's surface reflect, absorb, or emit electromagnetic (EM) radiation, and sensors can detect that radiation from a distance. Different materials (water, vegetation, concrete) interact with EM radiation differently, which is how sensors can tell them apart.

Remote sensing systems fall into two categories:

  • Passive systems detect naturally available energy: either sunlight reflected off Earth's surface or radiation the surface itself emits. Most passive sensors work in the visible, near-infrared, thermal infrared, and microwave portions of the EM spectrum. A regular camera is a simple example of a passive sensor.
  • Active systems supply their own energy source. They send out a pulse of radiation toward a target, then measure what bounces back. Radar and LiDAR are the two most common active systems. Because they generate their own signal, active sensors can collect data at night and through cloud cover.

Sensor Characteristics and Resolutions

Sensors are described by four types of resolution. Understanding these helps you evaluate what a particular sensor can and can't do.

  • Spatial resolution refers to the size of the smallest feature a sensor can detect. A sensor with 1-meter spatial resolution can distinguish objects 1 meter across. Higher spatial resolution means more detail, but such sensors usually image a smaller area per scene.
  • Spectral resolution describes how finely a sensor divides the EM spectrum into separate bands. A sensor with finer spectral resolution captures narrower wavelength ranges per band, making it better at distinguishing materials with similar appearances.
  • Radiometric resolution is the sensor's sensitivity to differences in energy. Higher radiometric resolution means the sensor can detect smaller differences in reflected or emitted energy, producing images with more shades of gray (or color) between the brightest and darkest values.
  • Temporal resolution is how frequently a sensor revisits and images the same area. A satellite that passes over the same spot every 16 days has coarser temporal resolution than one that revisits every 2 days. Temporal resolution depends on the satellite's orbit, the sensor's swath width, and latitude.

There's often a trade-off among these resolutions: a satellite with very high spatial resolution may have coarser temporal resolution because each pass covers less ground.
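To make radiometric resolution concrete: the number of distinguishable brightness levels grows as a power of two of the sensor's bit depth. A quick sketch (the loop values are illustrative, though 12-bit is in fact the depth of Landsat 8's OLI instrument):

```python
# Radiometric resolution: an n-bit sensor records 2**n brightness levels.
def gray_levels(bits: int) -> int:
    """Number of distinct brightness values an n-bit sensor can record."""
    return 2 ** bits

for bits in (8, 12, 16):
    print(f"{bits}-bit sensor: {gray_levels(bits):,} brightness levels")
```

So moving from an 8-bit to a 12-bit sensor multiplies the number of recordable brightness steps by 16, which is why newer sensors can separate subtler differences in reflected energy.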

Platforms for Remote Sensing

Satellites and Orbits

Satellites serve as the most common platforms for remote sensors. Two basic orbit types matter here:

  • Geostationary orbit: The satellite matches Earth's rotation, staying fixed above one point on the equator at about 35,786 km altitude. This gives continuous coverage of the same area, which is why weather satellites (like GOES) use this orbit. The trade-off is lower spatial resolution due to the great distance.
  • Sun-synchronous (polar) orbit: The satellite passes over each part of Earth at roughly the same local solar time on each pass, typically at 600–800 km altitude. This ensures consistent lighting conditions for comparing images taken on different dates. Landsat satellites use this orbit.
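The 35,786 km geostationary altitude quoted above isn't arbitrary: it follows from Kepler's third law, since a satellite whose orbital period matches one sidereal day must sit at one specific orbital radius. A minimal sketch, using standard values for Earth's gravitational parameter and equatorial radius:

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a^3 / GM), solved for orbital radius a.
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0    # Earth's equatorial radius, m
T_SIDEREAL = 86_164.1    # one sidereal day (Earth's rotation period), s

a = (GM * (T_SIDEREAL / (2 * math.pi)) ** 2) ** (1 / 3)  # orbital radius, m
altitude_km = (a - R_EARTH) / 1000
print(f"Geostationary altitude ≈ {altitude_km:,.0f} km")  # ≈ 35,786 km
```

Note the use of the sidereal day (about 86,164 s) rather than the 86,400-second solar day, since the satellite must match Earth's rotation relative to the stars.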

Aerial Photography and UAVs

Aerial photography involves capturing images from manned aircraft like airplanes or helicopters. It offers very high spatial resolution but limited spectral resolution (usually just visible light or near-infrared). Aerial photos have been used for decades for mapping, surveying, and monitoring landscape changes.

Unmanned Aerial Vehicles (UAVs), or drones, are increasingly popular for remote sensing. They offer several advantages:

  • Very high spatial resolution (down to centimeters)
  • Flexibility in when and where you collect data
  • Lower cost per mission than manned aircraft

Their main limitations are shorter flight times, smaller coverage areas per flight, and limited payload capacity compared to traditional aircraft.

LiDAR Systems

LiDAR (Light Detection and Ranging) is an active remote sensing system that fires rapid laser pulses toward the ground and measures how long each pulse takes to return. By recording millions of return times, LiDAR builds a detailed 3D point cloud of the surface.

LiDAR is especially valuable for creating high-resolution digital elevation models (DEMs). Because laser pulses can penetrate gaps in forest canopy and return from the ground below, LiDAR can map terrain even under dense vegetation.
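The timing-to-distance conversion behind LiDAR is simple: each pulse travels to the target and back at the speed of light, so the range is half the round-trip time multiplied by c. A minimal sketch (the example travel time is invented for illustration):

```python
# LiDAR range equation: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_s: float) -> float:
    """Distance to a target, given a pulse's round-trip travel time in seconds."""
    return C * round_trip_s / 2

# A pulse returning after ~6.67 microseconds hit a target roughly 1 km away.
print(f"{lidar_range(6.671e-6):.1f} m")
```

Repeating this calculation for millions of pulses per second, each tagged with the sensor's position and pointing angle, is what yields the 3D point cloud.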

Interpretation of Remote Sensing Data

Image Interpretation Techniques

Interpreting remote sensing imagery means identifying objects in the image and understanding what they represent. Analysts look at several visual characteristics:

  • Tone/color: How bright or dark an object appears, or what color it displays
  • Texture: The smoothness or roughness of a surface in the image
  • Pattern: The spatial arrangement of objects (e.g., rows of crops vs. random tree placement)
  • Shape and size: Geometric properties that help distinguish human-made features (rectangular buildings) from natural ones (irregular lake shorelines)
  • Shadow: Can reveal height and shape of objects, but can also obscure features
  • Site and association: The location of a feature and what surrounds it (a building next to a runway is likely an airport terminal)

Digital Image Processing

Computers handle most image analysis today through digital image processing. The general workflow follows these steps:

  1. Preprocessing: Correct errors in the raw data. Radiometric corrections fix sensor-related distortions in brightness values. Geometric corrections fix spatial distortions so the image aligns accurately with real-world coordinates.
  2. Image enhancement: Adjust the image to make features easier to see. This includes contrast stretching, filtering, and color compositing.
  3. Image transformation: Create new images by mathematically combining bands. A common example is calculating the Normalized Difference Vegetation Index (NDVI), which highlights healthy vegetation by comparing red and near-infrared reflectance.
  4. Image classification: Assign each pixel to a land cover category. Two main approaches exist:
    • Supervised classification: The analyst selects representative sample areas ("training sites") for each land cover class, and the software uses those samples to classify the rest of the image.
    • Unsupervised classification: The software automatically groups pixels with similar spectral characteristics into clusters, and the analyst then labels each cluster after the fact.
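The NDVI transformation from step 3 can be sketched directly: it is the difference between near-infrared and red reflectance, normalized by their sum, computed pixel by pixel. The band values below are invented for illustration:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel across two bands.
# These 2x2 reflectance grids are made-up values, not real sensor data.
red = np.array([[0.05, 0.30],
                [0.08, 0.25]])
nir = np.array([[0.50, 0.35],
                [0.45, 0.28]])

ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))  # values near +1 indicate healthy vegetation
```

Healthy vegetation reflects strongly in near-infrared and absorbs red light for photosynthesis, so its NDVI is high; bare soil and water score near zero or below.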

Change Detection and Temporal Analysis

Change detection identifies differences in a landscape by comparing images of the same area taken at different times. For example, comparing satellite images of a coastal city before and after a hurricane reveals which areas flooded or suffered structural damage.

This technique requires multi-temporal datasets, meaning images from two or more dates that have been carefully preprocessed so differences in brightness are due to actual surface changes, not differences in lighting or sensor calibration.
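One common change-detection approach is simple image differencing: subtract the two dates pixel by pixel and flag any difference larger than a noise threshold. A toy sketch, assuming the images are already co-registered and radiometrically corrected (the pixel values and threshold are invented):

```python
import numpy as np

# Change detection by image differencing on toy brightness values.
before = np.array([[120, 118],
                   [ 95, 200]])
after = np.array([[122,  60],
                  [ 96, 198]])

# Cast to int before subtracting so negative differences don't wrap around.
diff = np.abs(after.astype(int) - before.astype(int))
changed = diff > 30  # threshold separating real surface change from noise
print(changed)
```

Choosing the threshold is the hard part in practice: too low and sensor noise or lighting differences register as change, too high and subtle but real changes are missed.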

Remote Sensing Applications with GIS

Integration of Remote Sensing and GIS

Remote sensing and GIS are complementary. Remote sensing provides the raw spatial data (satellite images, LiDAR point clouds, aerial photos), while GIS provides the tools to store, manage, analyze, and visualize that data alongside other datasets like census information or road networks.

Applications in Various Fields

  • Precision agriculture: Multispectral and hyperspectral imagery detects variations in plant health and soil conditions across a field. Farmers use NDVI maps to identify stressed crops early, estimate yields, and apply fertilizer only where it's needed, reducing costs and environmental impact.
  • Forestry: Satellite imagery tracks deforestation over time, while LiDAR data maps forest canopy height and estimates biomass. Brazil's PRODES program, for instance, uses Landsat imagery to monitor annual deforestation rates in the Amazon.
  • Disaster management: After natural disasters like hurricanes, floods, or wildfires, satellite imagery helps responders quickly map affected areas and prioritize where to send aid. Before-and-after image comparisons reveal the extent and severity of damage.
  • Urban planning: Remote sensing tracks urban sprawl by mapping how built-up areas expand over time. Planners use this data alongside GIS layers (zoning maps, transportation networks, population data) to make informed decisions about infrastructure and land use.
  • Environmental monitoring: Repeated satellite observations track changes in ice cover, sea level, water quality, and air pollution at regional and global scales.