Weather forecasting models are the backbone of modern meteorology, and understanding how they work—not just their names—is what separates surface-level knowledge from true comprehension. You're being tested on the fundamental principles that drive atmospheric prediction: numerical weather prediction theory, spatial resolution trade-offs, ensemble uncertainty quantification, and the distinction between deterministic and probabilistic forecasting. These concepts appear repeatedly in exam questions about forecast accuracy, model selection, and the inherent limitations of atmospheric prediction.
Don't fall into the trap of memorizing model acronyms without understanding their purpose. Each model exists because it solves a specific forecasting problem—whether that's capturing fine-scale convection, providing long-range guidance, or quantifying forecast uncertainty. Know what makes each model unique and when meteorologists choose one over another. That comparative thinking is exactly what FRQ prompts target when they ask you to evaluate forecast tools for different scenarios.
Global models are the workhorses of medium-range forecasting, providing baseline predictions for the entire planet. They solve the primitive equations of atmospheric motion on a global grid, trading some local detail for comprehensive planetary coverage.
Compare: GFS vs. ECMWF—both provide global medium-range forecasts, but ECMWF's finer resolution and advanced data assimilation typically yield better accuracy beyond day 3. If an FRQ asks which model to trust for a 7-day forecast, ECMWF is generally the answer.
When global models lack sufficient detail, regional models fill the gap. These nested or limited-area models sacrifice global coverage to achieve finer spatial resolution, capturing mesoscale phenomena that coarser grids miss entirely.
Compare: NAM vs. HRRR—both cover North America, but HRRR's 3 km resolution and hourly updates make it superior for convective forecasting, while NAM's longer range (84 hours vs. 18 hours) provides extended guidance. Choose based on whether you need detail or lead time.
Single deterministic forecasts hide inherent uncertainty. Ensemble systems run multiple simulations with slightly perturbed initial conditions or model physics, revealing the range of possible outcomes and quantifying forecast confidence.
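The idea above can be seen in a toy experiment. The sketch below uses the Lorenz-63 system (a classic low-order chaotic model, not an operational NWP model) to show how an ensemble of slightly perturbed initial conditions diverges: the spread among members grows with lead time, which is exactly the uncertainty signal forecasters read from real ensemble systems. All parameter values here are illustrative choices, not operational settings.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, a toy chaotic 'atmosphere'."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

rng = np.random.default_rng(0)
n_members = 20
base = np.array([1.0, 1.0, 1.0])

# Each ensemble member starts from the analysis plus a tiny random perturbation,
# mimicking uncertainty in the initial conditions.
members = [base + 1e-3 * rng.standard_normal(3) for _ in range(n_members)]

spreads = []  # ensemble spread (std dev of x across members) at each step
for _ in range(2000):
    members = [lorenz_step(m) for m in members]
    spreads.append(np.std([m[0] for m in members]))

# Early in the forecast the members agree closely; by the end the spread has
# grown by orders of magnitude -- the signature of chaotic error growth.
print(f"early spread: {spreads[10]:.5f}, late spread: {spreads[-1]:.3f}")
```

The growth of spread is why ensemble output is read probabilistically: tight clustering means high confidence, wide spread means the atmosphere is in a low-predictability regime.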
Compare: Raw NWP output vs. MOS—numerical models provide physically consistent atmospheric fields, but MOS corrections typically improve point forecasts by 10-20% by accounting for local effects and systematic model errors. Both are necessary for operational forecasting.
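The MOS idea can be sketched in a few lines. Operational MOS uses multiple regression on many predictors over a long forecast archive; the toy version below (synthetic data, a single predictor, and a simple least-squares fit, all illustrative assumptions) shows the core mechanism, namely learning a statistical correction that removes a model's systematic bias at a point.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training archive: raw model 2 m temperature forecasts (degC) paired
# with verifying station observations. The observations embed a systematic
# model bias (slope 0.9, offset +2.0) plus random noise.
model_t = rng.uniform(0, 30, size=200)
obs_t = 0.9 * model_t + 2.0 + rng.normal(0, 1.0, size=200)

# Fit a MOS-style linear correction: obs ~= a * model + b
A = np.vstack([model_t, np.ones_like(model_t)]).T
(a, b), *_ = np.linalg.lstsq(A, obs_t, rcond=None)

# Apply the learned correction to a new raw forecast
raw_forecast = 25.0
corrected = a * raw_forecast + b
print(f"raw: {raw_forecast:.1f} C, MOS-corrected: {corrected:.1f} C")
```

Because the regression is trained on past forecast/observation pairs at a specific site, it absorbs local effects (terrain, land use, station siting) that the model grid cannot resolve, which is where the 10-20% point-forecast improvement comes from.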
Understanding the theoretical basis of numerical prediction is essential for evaluating any specific model's strengths and limitations.
Compare: Deterministic NWP vs. Ensemble systems—deterministic models give one answer assuming perfect knowledge, while ensembles acknowledge uncertainty by showing the spread of possible solutions. Modern forecasting requires both approaches.
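What "solving the equations on a grid" means can be illustrated with the simplest possible analogue: a 1D advection equation stepped forward with a first-order upwind finite-difference scheme. Real NWP models solve the full primitive equations with far more sophisticated numerics (spectral transforms, semi-implicit and semi-Lagrangian schemes), so treat this purely as a minimal sketch of the grid-point idea; the domain size, wind speed, and time step are arbitrary choices.

```python
import numpy as np

# Toy analogue of NWP: du/dt + c * du/dx = 0 on a periodic 1D grid,
# discretized with a first-order upwind difference in space and
# forward Euler in time.
nx, c, dx, dt = 100, 1.0, 1.0, 0.5          # grid points, wind speed, spacing, step
x = np.arange(nx)
u = np.exp(-0.01 * (x - 20) ** 2)           # initial "weather feature" centered at x=20

# Courant number c*dt/dx = 0.5 < 1 keeps the scheme stable (CFL condition) --
# the same constraint that forces high-resolution models to take short time steps.
for _ in range(80):                          # integrate 40 time units forward
    u = u - c * dt / dx * (u - np.roll(u, 1))  # upwind difference, periodic boundary

# The feature has been carried downstream from x=20 toward x=60 (c * t = 40),
# broadened by the scheme's numerical diffusion.
print("peak now near x =", int(np.argmax(u)))
```

Note the trade-off the sketch exposes: halving the grid spacing `dx` (finer resolution) forces a smaller `dt` via the CFL condition, which is why doubling a model's resolution costs far more than double the compute.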
| Concept | Best Examples |
|---|---|
| Global medium-range forecasting | ECMWF, GFS, UK Met Office Unified Model |
| High-resolution convective prediction | HRRR, WRF |
| Regional mesoscale forecasting | NAM, GEM |
| Ensemble/probabilistic methods | EPS, ECMWF ensemble |
| Statistical post-processing | MOS |
| Research and customizable applications | WRF |
| Coupled Earth system modeling | UK Met Office Unified Model, GEM |
| Rapid-update cycling | HRRR, NAM |
1. Which two models would you compare when evaluating the trade-off between forecast range and spatial resolution for predicting a severe weather outbreak?
2. Why does ECMWF typically outperform GFS in medium-range verification scores, and what specific model characteristics contribute to this difference?
3. Compare and contrast deterministic NWP output with ensemble prediction systems—when would a forecaster prefer probabilistic guidance over a single deterministic solution?
4. If an FRQ asks you to recommend a model for predicting flash flooding from afternoon thunderstorms, which model would you choose and why does its resolution matter?
5. Explain how MOS improves upon raw numerical model output—what fundamental limitation of NWP does statistical post-processing address?