Weather forecasting models are the backbone of modern meteorology, and understanding how they work is what separates surface-level knowledge from real comprehension. You're being tested on the principles that drive atmospheric prediction: numerical weather prediction theory, spatial resolution trade-offs, ensemble uncertainty quantification, and the distinction between deterministic and probabilistic forecasting. These concepts show up repeatedly in exam questions about forecast accuracy, model selection, and the inherent limits of atmospheric prediction.
Don't fall into the trap of memorizing model acronyms without understanding their purpose. Each model exists because it solves a specific forecasting problem, whether that's capturing fine-scale convection, providing long-range guidance, or quantifying forecast uncertainty. Know what makes each model unique and when meteorologists choose one over another. That comparative thinking is exactly what FRQ prompts target when they ask you to evaluate forecast tools for different scenarios.
Global deterministic models are the workhorses of medium-range forecasting, providing baseline predictions for the entire planet. They solve the primitive equations of atmospheric motion on a global grid, trading some local detail for comprehensive planetary coverage. "Deterministic" means each run produces a single forecast rather than a range of outcomes.
Compare: GFS vs. ECMWF: both provide global medium-range forecasts, but ECMWF's finer resolution and superior data assimilation typically yield better accuracy beyond day 3. If a question asks which model to trust for a 7-day forecast, ECMWF is generally the answer. GFS remains valuable because its free data availability means more forecasters use and verify it.
When global models lack sufficient detail, regional models fill the gap. These limited-area models sacrifice global coverage to achieve finer spatial resolution, capturing mesoscale phenomena that coarser grids miss entirely. They depend on a global model to supply boundary conditions at the edges of their domain.
The distinction between "convection-allowing" and "convection-parameterizing" models is worth understanding. At grid spacings much coarser than ~4 km, an individual thunderstorm spans at most a grid cell or two, far too few points to simulate explicitly, so the model must use a simplified scheme (a parameterization) to estimate its effects. At 3 km, the HRRR can resolve the updrafts and downdrafts of storms directly, producing far more realistic convective structures.
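As a rough sanity check, one common rule of thumb (an assumption here; the exact factor varies by model and scheme) is that a feature must span roughly six grid points before a model can resolve it explicitly. The function name and example sizes below are illustrative, not from any operational standard:

```python
# Rule-of-thumb resolvability check. Assumption: a model needs about
# six grid points across a feature to resolve it explicitly, so its
# "effective resolution" is ~6x the grid spacing.

def is_resolvable(feature_km: float, grid_spacing_km: float,
                  points_needed: int = 6) -> bool:
    """Can a feature of this size be explicitly resolved on this grid?"""
    return feature_km >= points_needed * grid_spacing_km

storm_km = 20.0  # a small convective storm complex
print(is_resolvable(storm_km, 13.0))  # GFS-like spacing: parameterize it
print(is_resolvable(storm_km, 3.0))   # HRRR-like spacing: resolve it
```

The same arithmetic explains why "3 km grid spacing" does not mean "sees 3 km features": the smallest well-resolved structures are several times larger than the grid itself.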
Compare: NAM vs. HRRR: both cover North America, but HRRR's 3 km resolution and hourly updates make it superior for convective forecasting, while NAM's longer range (84 hours vs. 18 hours) provides extended guidance. Choose based on whether you need detail or lead time.
A single deterministic forecast hides inherent uncertainty. Ensemble systems run multiple simulations with slightly perturbed initial conditions or model physics, revealing the range of possible outcomes and quantifying forecast confidence.
The core idea behind ensembles is straightforward: if you slightly change the starting conditions of a forecast and the answer changes dramatically, you have low confidence. If all the members agree, confidence is high.
The ECMWF runs a 51-member ensemble, and NOAA operates the Global Ensemble Forecast System (GEFS) with 31 members. Both are widely used for medium-range probabilistic guidance.
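The spread-as-confidence idea can be demonstrated with the Lorenz-63 system, a classic chaotic toy model. This is only a sketch: the perturbation size, member count, and integration length are arbitrary choices, not operational values:

```python
# Toy ensemble: integrate the Lorenz-63 system (a classic chaotic model)
# from slightly perturbed initial conditions and measure the spread of
# the members. Large spread = low forecast confidence.
import random
import statistics

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run_member(x0, y0, z0, steps):
    """Integrate one ensemble member and return its final x value."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

random.seed(0)
members = []
for _ in range(20):
    # Perturb the initial condition slightly, as an ensemble system does.
    members.append(run_member(1.0 + random.gauss(0, 1e-3), 1.0, 1.0, steps=1500))

spread = statistics.stdev(members)
print(f"ensemble spread after 15 time units: {spread:.2f}")
```

Even though the members start within a thousandth of a unit of each other, chaos amplifies the differences until the spread is comparable to the system's full variability, which is exactly why ensemble spread is a usable proxy for forecast confidence.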
MOS (Model Output Statistics) isn't a weather model in the traditional sense. It's a post-processing technique that corrects raw model output using statistics built from past forecasts and matching observations.
Compare: Raw NWP output vs. MOS: numerical models provide physically consistent atmospheric fields, but MOS corrections typically improve point forecasts by removing local biases and systematic model errors. Both are necessary for operational forecasting. Raw model output tells you what the atmosphere is doing; MOS tells you what that means for a specific location.
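The core MOS idea, a regression from past model output to past observations, can be sketched with synthetic data. Everything here (the bias, the noise level, the sample size) is invented for illustration; real MOS uses years of forecast/observation pairs and many predictors:

```python
# Minimal MOS-style correction: fit a linear regression mapping raw model
# forecasts to past observations, then apply it to new model output.
import random
random.seed(1)

# Pretend the raw model runs about 2 degrees too cold (synthetic data).
truth = [10 + 10 * random.random() for _ in range(200)]       # observed temps
raw = [0.9 * t - 2 + random.gauss(0, 0.5) for t in truth]     # model forecasts

# Ordinary least squares for truth = a * raw + b.
n = len(raw)
mx = sum(raw) / n
my = sum(truth) / n
a = sum((x - mx) * (y - my) for x, y in zip(raw, truth)) \
    / sum((x - mx) ** 2 for x in raw)
b = my - a * mx

new_forecast = 12.0                       # today's raw model temperature
corrected = a * new_forecast + b          # MOS-adjusted point forecast
print(f"raw: {new_forecast:.1f}, MOS-corrected: {corrected:.1f}")
```

Because the synthetic model is cold-biased, the regression pushes the corrected value warmer than the raw forecast, which is the whole point: the statistics learn the model's systematic error at a location and remove it.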
Understanding the theoretical basis of numerical prediction is essential for evaluating any specific model's strengths and limitations.
All the models above rest on the same foundation: solving the primitive equations, which are mathematical expressions of conservation of mass, momentum, and energy governing atmospheric motion.
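In one common simplified form (rotating frame, with F collecting friction and Q diabatic heating; operational models use more elaborate versions), the three conservation laws read:

```latex
% Momentum (Newton's second law in a rotating frame):
\frac{D\mathbf{v}}{Dt} = -\frac{1}{\rho}\nabla p - 2\boldsymbol{\Omega}\times\mathbf{v} + \mathbf{g} + \mathbf{F}

% Mass (continuity equation):
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) = 0

% Energy (first law of thermodynamics):
c_p \frac{DT}{Dt} - \frac{1}{\rho}\frac{Dp}{Dt} = Q
```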
Here's how NWP works in practice:
1. Observations from satellites, radiosondes, aircraft, and surface stations are collected.
2. Data assimilation blends those observations with a short-range prior forecast to build the best estimate of the current atmospheric state (the analysis).
3. The model integrates the governing equations forward in time from that analysis, step by step.
4. Raw gridded output is post-processed into usable forecast products.
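A drastically simplified sketch of the "integrate forward in time" step, using 1-D advection of a temperature field in place of the full primitive equations (all sizes and values here are illustrative):

```python
# Toy "NWP" forecast: advect a temperature field with a 1-D upwind
# finite-difference scheme on a periodic grid. Real models solve the
# full primitive equations in 3-D, but the time-stepping idea is the same.
import math

nx, dx = 100, 10_000.0        # 100 points, 10 km spacing (1000 km domain)
u, dt = 10.0, 500.0           # 10 m/s wind, 500 s step (CFL = u*dt/dx = 0.5)

# Initial condition: a warm bump centered at grid point 50.
temp = [15.0 + 5.0 * math.exp(-((i - 50) * dx / 1e5) ** 2) for i in range(nx)]

def step(field):
    """One upwind time step of dT/dt = -u * dT/dx (u > 0: look upstream)."""
    c = u * dt / dx
    return [field[i] - c * (field[i] - field[i - 1]) for i in range(nx)]

for _ in range(60):           # forecast 60 * 500 s ~ 8 hours ahead
    temp = step(temp)

peak = max(range(nx), key=lambda i: temp[i])
print(f"warm bump moved from grid point 50 to {peak}")
```

The wind carries the warm bump downstream by u * t = 300 km, i.e. 30 grid points, and the scheme's numerical diffusion slightly smooths it along the way, a toy version of why coarse grids blur sharp features.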
Initial condition sensitivity is the fundamental limit on weather prediction. Small errors in the initial analysis grow over time due to the chaotic nature of atmospheric dynamics, which is why forecast skill drops sharply beyond about 10 days and useful deterministic prediction is generally capped at roughly 10-14 days.
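A back-of-envelope version of this limit, assuming (purely for illustration) that analysis errors double every two days:

```python
# Chaos back-of-envelope: if analysis errors double every ~2 days (an
# assumed, roughly realistic value), even a tiny initial error swamps
# the signal within about two weeks.
initial_error = 0.1     # degrees C, small analysis error
saturation = 10.0       # degrees C, typical day-to-day variability
doubling_days = 2.0     # assumed error-doubling time

day, error = 0.0, initial_error
while error < saturation:
    day += doubling_days
    error *= 2.0

print(f"error saturates after about {day:.0f} days")
```

Note that halving the initial error buys only one extra doubling time (two days here), which is why better observations alone cannot push deterministic skill much past the two-week horizon.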
Compare: Deterministic NWP vs. Ensemble systems: deterministic models give one answer assuming we know the current state perfectly, while ensembles acknowledge uncertainty by showing the spread of possible solutions. Modern operational forecasting requires both approaches.
| Concept | Best Examples |
|---|---|
| Global medium-range forecasting | ECMWF, GFS, UK Met Office Unified Model |
| High-resolution convective prediction | HRRR, WRF |
| Regional mesoscale forecasting | NAM, GEM |
| Ensemble/probabilistic methods | GEFS, ECMWF ensemble |
| Statistical post-processing | MOS |
| Research and customizable applications | WRF |
| Coupled Earth system modeling | UK Met Office Unified Model, GEM |
| Rapid-update cycling | HRRR |
1. Which two models would you compare when evaluating the trade-off between forecast range and spatial resolution for predicting a severe weather outbreak?
2. Why does ECMWF typically outperform GFS in medium-range verification scores, and what specific model characteristics contribute to this difference?
3. Compare deterministic NWP output with ensemble prediction systems. When would a forecaster prefer probabilistic guidance over a single deterministic solution?
4. If a question asks you to recommend a model for predicting flash flooding from afternoon thunderstorms, which model would you choose and why does its resolution matter?
5. Explain how MOS improves upon raw numerical model output. What fundamental limitation of NWP does statistical post-processing address?