Time series forecasting sits at the heart of business decision-making—from inventory planning and revenue projections to workforce scheduling and capital budgeting. You're being tested not just on whether you can name these models, but on whether you understand when each approach works best and why certain models outperform others in specific situations. The underlying principles here—stationarity, autocorrelation, seasonality, and the bias-variance tradeoff—show up repeatedly in exam questions.
Think of these models as tools in a toolkit: a skilled forecaster knows that an ARIMA model excels with stationary data, while Holt-Winters handles seasonal trends, and LSTM networks capture complex nonlinear patterns. Don't just memorize the acronyms—know what data characteristics each model addresses and when you'd choose one over another. That comparative thinking is what separates strong exam performance from mediocre recall.
These foundational models work by averaging or weighting historical observations to filter out noise and reveal underlying patterns. They're computationally simple and often surprisingly effective for stable, well-behaved data.
Compare: Moving Average vs. Exponential Smoothing—both smooth historical data, but MA weights all included observations equally while exponential smoothing prioritizes recent values. If an exam question describes rapidly changing conditions, exponential smoothing is typically the better choice.
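The contrast can be made concrete with a minimal sketch (toy data and functions of our own, not any library's API): after a level shift, the moving average is still dragged down by old observations, while exponential smoothing adapts faster.

```python
# Minimal sketch contrasting a simple moving average with simple
# exponential smoothing on a series with a recent level shift.

def moving_average_forecast(series, window):
    """Forecast the next value as the unweighted mean of the last `window` points."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def exponential_smoothing_forecast(series, alpha):
    """Simple exponential smoothing: recent observations receive geometrically
    larger weights, controlled by alpha in (0, 1]."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

series = [10, 10, 10, 10, 20, 20, 20]  # level shifts from 10 to 20
print(moving_average_forecast(series, window=6))          # 15.0 -- anchored to old values
print(exponential_smoothing_forecast(series, alpha=0.5))  # 18.75 -- tracks the shift
```

With equal weights over six points, half the window predates the shift; with alpha = 0.5, the three newest points dominate, which is exactly why exponential smoothing wins under rapidly changing conditions.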
These models exploit autocorrelation—the idea that past values of a series help predict future values. They're the workhorses of classical time series analysis.
Compare: ARIMA vs. SARIMA—both handle non-stationary data through differencing, but SARIMA adds seasonal differencing and seasonal AR/MA terms. If an FRQ presents monthly sales data with clear holiday spikes, SARIMA is your answer.
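What differencing (the "I" in ARIMA) accomplishes can be shown in a few lines, using made-up series for illustration: ordinary first differencing removes a linear trend, and seasonal differencing at the season's lag removes a repeating spike, as in SARIMA.

```python
# Sketch of the "I" (integrated) step: differencing turns a trending or
# seasonal series into a stationary one (toy data, not exam content).

def difference(series, lag=1):
    """Return the lag-differenced series: y_t - y_{t-lag}."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

trend = [3 * t + 5 for t in range(8)]            # deterministic upward trend
print(difference(trend))                          # constant 3s: trend removed

# Seasonal differencing (lag = season length) plays the same role in SARIMA:
monthly = [t + (10 if t % 4 == 0 else 0) for t in range(12)]  # period-4 spike
print(difference(monthly, lag=4))                 # constant 4s: spikes cancelled
```

The differenced series is constant in both cases, i.e., stationary, which is what lets the AR and MA components do their job.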
When multiple time series interact—like how advertising spend affects sales, which affects inventory—you need models that capture cross-variable dependencies.
Compare: VAR vs. State Space—VAR models observable relationships between multiple series, while state space models infer hidden dynamics driving observed data. State space is more flexible but requires more careful specification.
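A one-step VAR(1) forecast makes the cross-variable dependence visible. The coefficients below are hypothetical, chosen so that sales depend on lagged advertising as well as their own past, which is what distinguishes VAR from fitting two separate univariate AR models.

```python
# Hypothetical VAR(1) one-step forecast for two interacting series:
# [advertising, sales]. Coefficients are made up for illustration.

def var1_forecast(last, A, c):
    """One-step VAR(1) forecast: y_t = c + A @ y_{t-1}, without numpy."""
    return [c[i] + sum(A[i][j] * last[j] for j in range(len(last)))
            for i in range(len(last))]

A = [[0.5, 0.0],   # advertising depends only on its own past
     [0.3, 0.6]]   # sales depend on past advertising AND past sales
c = [1.0, 2.0]
last = [10.0, 40.0]  # [advertising_{t-1}, sales_{t-1}]
print(var1_forecast(last, A, c))  # [6.0, 29.0]
```

The off-diagonal entry (0.3) is the cross-variable channel: zero it out and the model collapses into two independent AR(1) equations.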
Some models are purpose-built to decompose time series into interpretable components—level, trend, and seasonal effects.
Compare: Holt-Winters vs. Prophet—both decompose series into trend and seasonality, but Prophet handles irregular events (holidays, promotions) more elegantly and tolerates messy data. Holt-Winters remains preferred when you need a lightweight, well-understood baseline.
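The Holt-Winters decomposition can be sketched as three coupled smoothing recursions, one each for level, trend, and seasonality. This is a deliberately naive toy implementation (crude initialization, additive form only), not production code.

```python
# Minimal additive Holt-Winters update loop: separate smoothing equations
# for level (alpha), trend (beta), and seasonality (gamma).

def holt_winters_additive(y, m, alpha, beta, gamma, steps):
    """Smooth series y with season length m, then forecast `steps` ahead."""
    level = sum(y[:m]) / m                        # naive initial level
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m**2 # naive initial trend
    season = [y[i] - level for i in range(m)]     # naive initial seasonals
    for t in range(m, len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    # Forecast: extrapolate trend, reuse the matching seasonal component.
    return [level + (h + 1) * trend + season[(len(y) + h) % m]
            for h in range(steps)]

# A trending series with period-4 seasonality:
y = [t + [0, 5, 0, -5][t % 4] for t in range(24)]
print(holt_winters_additive(y, m=4, alpha=0.3, beta=0.1, gamma=0.2, steps=4))
```

Each forecast is level plus extrapolated trend plus the seasonal offset for that step, which is why Holt-Winters handles regular seasonality well but has no native notion of one-off events like holidays, the gap Prophet fills.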
When relationships are nonlinear or patterns are too complex for classical models, neural networks offer powerful alternatives—at the cost of interpretability.
Compare: ARIMA vs. LSTM—ARIMA assumes linear relationships and works well with limited data; LSTM captures nonlinear patterns but needs large datasets and careful regularization. For exam purposes, know that LSTM is the go-to when classical assumptions fail.
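The source of LSTM's nonlinearity is its gated cell update, sketched below for a scalar state with hand-picked (not learned) weights. Real LSTMs learn vector-valued weights from large datasets; this toy step just shows the multiplicative gating that linear recurrences like ARIMA's cannot express.

```python
import math

# One LSTM cell step in plain Python, with illustrative (not learned) weights.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """Single step for scalar input/state; w maps gate name -> (wx, wh, b)."""
    gate = lambda name, squash: squash(w[name][0] * x + w[name][1] * h_prev + w[name][2])
    f = gate("forget", sigmoid)   # how much of the old cell state to keep
    i = gate("input", sigmoid)    # how much new information to write
    g = gate("cand", math.tanh)   # candidate value to write
    o = gate("output", sigmoid)   # how much of the cell state to expose
    c = f * c_prev + i * g        # multiplicative, nonlinear state update
    h = o * math.tanh(c)          # unlike ARIMA's linear recurrence in past y's
    return h, c

w = {k: (0.5, 0.5, 0.0) for k in ("forget", "input", "cand", "output")}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 1.0]:        # run the cell over a short input sequence
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

Because the gates themselves depend on the input and hidden state, the same shock can be remembered or discarded depending on context, flexibility that comes at the cost of interpretability and data hunger.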
| Concept | Best Examples |
|---|---|
| Smoothing & Noise Reduction | Moving Average, Exponential Smoothing |
| Autocorrelation Modeling | AR, ARIMA |
| Seasonal Pattern Handling | SARIMA, Holt-Winters, Prophet |
| Multivariate Relationships | VAR, State Space Models |
| Nonlinear & Complex Patterns | LSTM Networks |
| Missing Data & Outliers | Prophet, State Space Models |
| Interpretability Priority | Holt-Winters, Exponential Smoothing |
| Stationarity Required | AR, MA, VAR |
Which two models both handle seasonality but differ in their treatment of irregular events like holidays? What makes one more suitable for messy real-world data?
If you're given a non-stationary time series with no seasonal pattern, which model family would you choose, and what does the "I" component accomplish?
Compare and contrast VAR and ARIMA: when would you choose a multivariate approach over a univariate one, and what assumption does VAR make about causality?
A retail company has 10 years of daily sales data with strong weekly and annual seasonality, occasional outliers from promotions, and several structural changes in trend. Rank Prophet, SARIMA, and LSTM in terms of suitability and justify your ranking.
What distinguishes state space models from other approaches in this list, and why might a forecaster choose the Kalman filter framework over direct ARIMA estimation?