Remote Sensing Platforms and Capabilities
Satellite and airborne remote sensing let us observe Earth's surface, atmosphere, and subsurface properties without direct contact. These platforms collect electromagnetic radiation reflected or emitted by the surface, and the choice of platform determines your spatial resolution, coverage area, and how often you can revisit the same location.
Satellite Platforms
Satellite platforms orbit Earth and collect data at regular intervals, providing global coverage and repeated observations over time. Major missions include Landsat (30 m resolution, since 1972), MODIS (250 m to 1 km, near-daily coverage), and Sentinel-2 (10-20 m, 5-day revisit).
Satellites are classified by orbit type, and each orbit suits different applications:
- Geostationary orbit (e.g., GOES): The satellite matches Earth's rotation, staying fixed above one point at ~35,786 km altitude. This allows continuous monitoring of weather systems and atmospheric dynamics over a specific region, but spatial resolution is coarse.
- Polar/sun-synchronous orbit (e.g., Landsat, NOAA satellites): The satellite passes near both poles, gradually covering the entire globe. Sun-synchronous orbits cross the equator at the same local solar time each pass, which keeps illumination conditions consistent between acquisitions.
The choice of platform depends on the tradeoff between spatial resolution, temporal resolution (revisit time), and area of coverage. Weather forecasting needs frequent revisits (geostationary), while detailed land cover mapping needs finer spatial resolution (polar-orbiting).
Airborne Platforms
Airborne platforms (aircraft and drones) operate at lower altitudes, which gives them higher spatial resolution and more flexibility in when and where data is collected.
- Aircraft can carry optical, thermal, and radar sensors, and they're well suited for targeted surveys over mineral exploration sites, agricultural fields, or urban areas.
- Drones (UAVs) achieve even finer resolution (centimeter-scale) and can be deployed rapidly for small-area surveys or emergency response.
- Airborne platforms are particularly useful for detailed mapping, infrastructure monitoring, and environmental assessment where satellite resolution isn't sufficient.
The tradeoff: airborne data covers smaller areas and costs more per unit area than satellite data, but the spatial detail and scheduling flexibility are far superior.
Multispectral vs. Hyperspectral Imaging
Both techniques measure reflected electromagnetic radiation across multiple wavelength bands, but they differ in spectral resolution, which determines how finely you can distinguish materials.
Multispectral Imaging
Multispectral sensors capture data in a small number of discrete, relatively broad spectral bands (typically about 3 to 15) spanning visible, near-infrared (NIR), and shortwave infrared (SWIR) wavelengths. Landsat 8, for example, has 11 bands, while Sentinel-2 has 13.
Because there are fewer bands, multispectral data is faster to process and easier to work with. It's the workhorse for:
- Land cover classification (forest vs. urban vs. water)
- Vegetation health monitoring (using NIR and red bands)
- Regional-to-global geological mapping
The limitation is that broad bands can't distinguish materials with similar overall reflectance but subtle spectral differences.

Hyperspectral Imaging
Hyperspectral sensors collect data across hundreds of narrow, contiguous bands (often 100-200+), producing a near-continuous spectral signature for every pixel. A spectral signature is the unique pattern of reflectance and absorption across wavelengths that acts like a fingerprint for a material.
This high spectral resolution lets you detect subtle variations that multispectral sensors miss:
- Distinguishing between mineral species with similar color but different absorption features (e.g., kaolinite vs. montmorillonite clays)
- Identifying soil composition and contamination
- Assessing water quality parameters like chlorophyll concentration
Processing hyperspectral data is more computationally demanding. Common analysis techniques include:
- Principal Component Analysis (PCA): Reduces the hundreds of bands to a smaller number of uncorrelated components that capture most of the variance, making the data more manageable.
- Spectral unmixing: Decomposes each pixel's spectrum into contributions from known "endmember" spectra, estimating the fractional abundance of each material within the pixel.
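Both techniques can be sketched in a few lines of numpy. This is a minimal illustration on synthetic data: the endmember spectra and cube are made up, and real unmixing workflows typically enforce non-negativity and sum-to-one constraints on the abundances rather than using plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Linear spectral unmixing (toy example) ---
# Three made-up endmember spectra over 50 bands (columns of E).
E = rng.uniform(0, 1, size=(50, 3))
true_frac = np.array([0.6, 0.3, 0.1])   # fractional abundances
pixel = E @ true_frac                   # mixed-pixel spectrum

# Recover abundances by least squares (real unmixing usually adds
# non-negativity and sum-to-one constraints on the fractions).
frac, *_ = np.linalg.lstsq(E, pixel, rcond=None)

# --- PCA dimensionality reduction (toy example) ---
# A 20x20x50 synthetic cube flattened to (pixels, bands), mean-centred,
# then projected onto its leading principal components via the SVD.
cube = rng.normal(size=(20, 20, 50))
X = cube.reshape(-1, 50)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:10].T                 # first 10 component images
```

Because the synthetic pixel lies exactly in the span of the endmembers, least squares recovers the true fractions; with real, noisy spectra the constrained variants are preferred.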
Key distinction: Multispectral imaging tells you what general category a surface belongs to. Hyperspectral imaging can tell you what specific material is present.
Deriving Geophysical Parameters
Raw remote sensing data needs significant processing before it yields meaningful geophysical information. The workflow moves from pre-processing through parameter retrieval to validation.
Pre-processing and Corrections
Before any analysis, three types of corrections are applied:
- Radiometric correction: Accounts for sensor calibration differences and removes instrument noise, ensuring measurements are consistent across sensors and time periods.
- Atmospheric correction: Removes the effects of atmospheric scattering (Rayleigh, aerosol) and gas absorption (water vapor, ozone) to recover true surface reflectance. Without this step, two images of the same surface taken on different days could look very different.
- Geometric correction (orthorectification): Corrects for terrain displacement, sensor viewing angle, and Earth's curvature so that pixels align accurately with real-world coordinates and other geospatial datasets.
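As a concrete taste of atmospheric correction, here is a sketch of dark-object subtraction (DOS), one of the simplest haze-removal methods: it assumes the darkest pixels in a scene should reflect almost nothing, so any signal there is attributed to atmospheric path radiance. The data below is synthetic; operational workflows use radiative-transfer-based tools rather than this shortcut.

```python
import numpy as np

# Synthetic at-sensor band with an additive haze offset (0.04 simulates
# atmospheric path radiance added to every pixel).
rng = np.random.default_rng(1)
band = rng.uniform(0.05, 0.6, size=(100, 100)) + 0.04

# Dark-object subtraction: estimate the haze term from a very low
# percentile of the scene and subtract it everywhere, clipping at zero.
dark = np.percentile(band, 0.1)
corrected = np.clip(band - dark, 0.0, None)
```

The percentile rather than the strict minimum makes the estimate robust to a few anomalously dark pixels.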
Parameter Retrieval Algorithms
Once you have corrected surface reflectance or radiance, various algorithms extract geophysical parameters:
- Vegetation indices: The Normalized Difference Vegetation Index (NDVI) is calculated from the near-infrared (NIR) and red reflectance bands as NDVI = (NIR - Red) / (NIR + Red).
Healthy vegetation strongly reflects NIR and absorbs red light, so NDVI values near +1 indicate dense, healthy vegetation, while values near 0 indicate bare soil or water.
- Land surface temperature (LST): Derived from thermal infrared bands using split-window or single-channel algorithms that correct for surface emissivity and atmospheric effects.
- Soil moisture: Estimated from microwave (radar) data, since the dielectric constant of soil increases sharply with water content, changing the backscatter signal.
- Digital elevation models (DEMs): Generated from stereo optical imagery or radar interferometry (InSAR), representing surface topography at resolutions from ~30 m (SRTM) down to sub-meter (lidar).
- Spectral unmixing: Linear spectral unmixing estimates the fractional abundance of different materials within a mixed pixel by modeling the pixel's spectrum as a weighted sum of endmember spectra.
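The NDVI computation above is a one-liner per pixel. A minimal sketch with illustrative (made-up) reflectance values:

```python
import numpy as np

# Illustrative red and NIR surface reflectance for a 2x2 patch:
# top-left is vegetation-like (high NIR, low red), top-right soil-like.
red = np.array([[0.05, 0.30], [0.10, 0.08]])
nir = np.array([[0.50, 0.32], [0.15, 0.45]])

# NDVI = (NIR - Red) / (NIR + Red); the small floor in the denominator
# guards against division by zero over dark water pixels.
ndvi = (nir - red) / np.maximum(nir + red, 1e-10)
```

The vegetation-like pixel comes out near +0.8, the soil-like pixel near 0, matching the interpretation given above.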

Time-Series Analysis and Validation
Repeated observations over time enable monitoring of dynamic processes:
- Detecting land cover change (deforestation, urban expansion)
- Tracking vegetation phenology (green-up, senescence cycles)
- Measuring surface deformation trends (subsidence, uplift)
Multitemporal data reveals trends, seasonal patterns, and anomalies that single-date imagery cannot capture.
Validation is essential. Derived parameters must be compared against independent reference data from field surveys, in-situ sensors, or ground-truthing campaigns. Common accuracy metrics include:
- Root mean square error (RMSE): Quantifies the average magnitude of prediction errors.
- Correlation coefficient (r or R²): Measures how well remote sensing estimates track ground observations.
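Both metrics are straightforward to compute from paired samples. A stdlib-only sketch (the numbers are hypothetical estimate/observation pairs, e.g. temperatures in °C):

```python
import math

# Hypothetical paired samples: remote sensing estimates vs. ground truth.
estimated = [21.0, 24.5, 19.8, 30.2, 27.1]
observed  = [20.4, 25.0, 19.0, 31.0, 26.5]
n = len(observed)

# RMSE: root of the mean squared error between pairs.
rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(estimated, observed)) / n)

# Pearson correlation coefficient r.
me = sum(estimated) / n
mo = sum(observed) / n
cov = sum((e - me) * (o - mo) for e, o in zip(estimated, observed))
r = cov / math.sqrt(sum((e - me) ** 2 for e in estimated)
                    * sum((o - mo) ** 2 for o in observed))
```

RMSE is in the units of the parameter itself, while r is dimensionless, which is why the two are usually reported together.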
Remote Sensing Techniques for Geophysical Applications
Optical Remote Sensing
Optical sensors capture reflected sunlight in visible and infrared wavelengths. This is the most widely used remote sensing approach for surface characterization.
- Land cover classification uses algorithms like maximum likelihood, support vector machines, or random forests applied to multispectral imagery to map forests, urban areas, water bodies, and agricultural land.
- Vegetation monitoring relies on spectral indices (NDVI, EVI) to assess plant health, productivity, and seasonal cycles.
- Geological applications include mineral mapping using diagnostic absorption features, lithological discrimination between rock types, and structural analysis of faults and folds from lineament patterns.
The main limitation: optical sensors cannot see through clouds, and they only work during daylight.
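To make the classification idea concrete, here is a toy minimum-distance-to-means classifier, a simpler cousin of the maximum-likelihood approach mentioned above (it ignores class covariance). The training spectra are synthetic four-band reflectance values, not real sensor data.

```python
import numpy as np

# Synthetic training spectra (4 bands per pixel, last band ~ NIR):
# water is dark everywhere, forest is bright in NIR, urban is bright overall.
train = {
    "water":  np.array([[0.02, 0.03, 0.01, 0.01], [0.03, 0.04, 0.02, 0.01]]),
    "forest": np.array([[0.03, 0.06, 0.04, 0.45], [0.04, 0.07, 0.05, 0.50]]),
    "urban":  np.array([[0.20, 0.22, 0.25, 0.30], [0.25, 0.26, 0.28, 0.33]]),
}
classes = list(train)
means = np.stack([train[c].mean(axis=0) for c in classes])  # (3, 4)

def classify(pixel):
    """Assign the class whose mean training spectrum is closest
    (Euclidean distance in band space)."""
    d = np.linalg.norm(means - pixel, axis=1)
    return classes[int(np.argmin(d))]

label = classify(np.array([0.03, 0.06, 0.05, 0.48]))  # forest-like spectrum
```

Maximum likelihood, SVMs, and random forests replace the distance rule with statistically richer decision boundaries, but the supervised train-then-label structure is the same.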
Thermal Infrared Remote Sensing
Thermal sensors detect emitted radiation (typically 8-14 μm wavelength) rather than reflected sunlight, measuring surface temperature.
- Urban heat islands: Thermal imagery maps temperature differences between built-up areas and surrounding vegetation.
- Volcanic monitoring: Detects thermal anomalies at active volcanoes before and during eruptions.
- Evapotranspiration: LST is a key input for energy balance models that estimate water loss from agricultural fields.
- Thermal inertia mapping: Materials heat and cool at different rates. By comparing day and night thermal images, you can infer soil moisture and distinguish rock types based on their thermal properties.
Radar and Lidar Remote Sensing
These active sensors emit their own energy, which makes them independent of sunlight and (for radar) cloud cover.
Synthetic Aperture Radar (SAR):
- Operates at microwave wavelengths that penetrate clouds, rain, and darkness.
- InSAR (Interferometric SAR) compares the phase of radar returns from two passes over the same area to measure surface deformation with millimeter-scale precision. Applications include earthquake displacement mapping, volcanic inflation, and land subsidence from groundwater extraction.
- Polarimetric SAR transmits and receives radar in different polarization states (HH, VV, HV), providing information on surface roughness, soil moisture, and vegetation biomass.
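The millimeter-scale sensitivity of InSAR follows directly from the phase-to-displacement relation: one full fringe (2π) of phase change corresponds to half a wavelength of line-of-sight motion, because the signal travels the path twice. A sketch using an approximate Sentinel-1 C-band wavelength (the sign convention, toward or away from the sensor, varies by processor and is illustrative here):

```python
import math

wavelength = 0.0555  # metres, approximate Sentinel-1 C-band wavelength

def phase_to_los_displacement(dphi):
    """Convert an unwrapped interferometric phase change (radians) into
    line-of-sight displacement in metres. The factor 4*pi (not 2*pi)
    accounts for the two-way travel of the radar signal."""
    return dphi * wavelength / (4 * math.pi)

# One full fringe of phase -> half a wavelength of LOS motion (~2.8 cm).
d = phase_to_los_displacement(2 * math.pi)
```

Since a fraction of a fringe is readily measurable, sub-centimeter deformation is detectable, which is what makes InSAR useful for subsidence and earthquake studies.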
Lidar (Light Detection and Ranging):
- Emits laser pulses and measures the return time to build high-resolution 3D point clouds of the surface.
- Airborne Laser Scanning (ALS) produces DEMs at sub-meter resolution and can penetrate forest canopy to map bare-earth topography beneath.
- Used for topographic mapping, forest inventory (canopy height, biomass estimation), fault scarp detection, and urban 3D modeling.
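Turning a ground-classified point cloud into a DEM can be as simple as averaging elevations per grid cell. This is a deliberately crude binning sketch on synthetic points; production tools interpolate via TINs or kriging to fill gaps and honour breaklines.

```python
import numpy as np

# Synthetic ground-classified lidar returns over a 100 m x 100 m tile:
# x, y in metres, elevation z with a gentle eastward slope plus noise.
rng = np.random.default_rng(2)
n = 5000
x = rng.uniform(0, 100, n)
y = rng.uniform(0, 100, n)
z = 50.0 + 0.1 * x + rng.normal(0, 0.05, n)

# Grid into a 1 m DEM by averaging z within each cell.
cell = 1.0
cols = (x // cell).astype(int)
rows = (y // cell).astype(int)
sums = np.zeros((100, 100))
counts = np.zeros((100, 100))
np.add.at(sums, (rows, cols), z)
np.add.at(counts, (rows, cols), 1)
dem = np.full((100, 100), np.nan)   # NaN marks cells with no returns
mask = counts > 0
dem[mask] = sums[mask] / counts[mask]
```

Cells that received no returns stay NaN, which mirrors the void-filling problem real lidar DEM production has to solve.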
Multi-sensor Integration
No single sensor captures everything. Combining data from different platforms and sensor types provides complementary information and improves accuracy.
- Pan-sharpening fuses a high-resolution panchromatic band with lower-resolution multispectral bands to produce a sharp, color image.
- Optical + radar fusion improves land cover classification by combining spectral information (optical) with structural and moisture information (SAR).
- Lidar + hyperspectral integration enables simultaneous analysis of vegetation structure (3D canopy from lidar) and species composition (spectral signatures from hyperspectral).
- Decision-level fusion combines classification results from different sensors rather than merging raw data, letting each sensor contribute where it performs best.
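Pan-sharpening, the simplest of these fusions, can be illustrated with the Brovey transform: each upsampled multispectral band is rescaled by the ratio of the pan band to the band average, so spatial detail comes from pan while colour ratios come from the multispectral data. The tiny arrays below are made up for illustration.

```python
import numpy as np

# Toy inputs: a 2x2, 3-band multispectral image and a 4x4 panchromatic
# band at twice the resolution (values are illustrative).
ms = np.array([[[0.2, 0.4], [0.3, 0.5]],    # red
               [[0.3, 0.2], [0.4, 0.6]],    # green
               [[0.1, 0.3], [0.2, 0.4]]])   # blue
pan = np.linspace(0.1, 0.9, 16).reshape(4, 4)

# Upsample the MS bands to the pan grid by nearest neighbour.
ms_up = ms.repeat(2, axis=1).repeat(2, axis=2)

# Brovey transform: scale each band by pan / per-pixel band mean.
intensity = ms_up.mean(axis=0)
sharp = ms_up * pan / np.maximum(intensity, 1e-10)
```

A useful property to check: the band-averaged sharpened image equals the pan band exactly, confirming that the spatial detail is inherited from pan while the band ratios are preserved.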
These multi-sensor approaches leverage the strengths of each technique to build a more complete picture of geophysical processes than any single sensor could provide alone.