Time series analysis is the backbone of forecasting. Your ability to decompose data into its fundamental components determines how well you can model real-world phenomena. The core skill here is distinguishing systematic patterns from random variation, isolating each component, and knowing when to apply specific techniques to make messy data analyzable.
These concepts connect directly to regression, ARIMA modeling, and forecast evaluation, so everything that comes later builds on what's covered here. When you see a time series, you should immediately ask: What's the trend? Is there seasonality? Is this stationary?
Systematic components are the portions of a time series that follow recognizable, modelable patterns. These are what forecasting models try to capture. Everything left over is noise.
Trend is the long-term directional movement in the data: upward (growth), downward (decline), or flat (stability).
Seasonality refers to fixed-period, recurring patterns that repeat at known intervals (daily, weekly, monthly, quarterly, or annually).
Cyclical patterns are longer-term fluctuations tied to business or economic cycles, typically spanning multiple years.
Compare: Seasonality vs. Cyclical Patterns: both create repeated ups and downs, but seasonality has a fixed, known period while cycles have variable duration. If an exam question describes a pattern repeating "every 3-5 years," that's cyclical. If it's "every December," that's seasonal.
Level is the baseline value around which the series fluctuates. Think of it as the "center of gravity" of your data.
Not everything in a time series can be explained by patterns. Understanding what's random, and confirming that your model residuals look random, is essential for valid inference.
Irregular fluctuations are unpredictable, non-repeating variations caused by one-time events like natural disasters, policy shocks, or market surprises.
White noise is a purely random series with three specific properties: zero mean, constant variance, and zero autocorrelation at all lags.
Compare: Irregular Fluctuations vs. White Noise: irregular fluctuations describe the concept of randomness in raw data, while white noise is a specific statistical property you test for in residuals. Your model succeeds when residuals behave like white noise.
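A quick way to test whether residuals behave like white noise is to compute their sample autocorrelations and check that every lag sits near zero. This is a minimal numpy sketch (simulated residuals standing in for real model output; the 0.1 cutoff is a deliberately loose practical bound, looser than the usual ±2/√n visual band):

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(42)
residuals = rng.normal(size=2000)  # stand-in for model residuals

# For white noise, every autocorrelation should sit near zero.
# The usual visual band is +/- 2/sqrt(n); 0.1 is a looser cutoff
# that is robust to sampling noise.
max_acf = max(abs(autocorr(residuals, k)) for k in range(1, 11))
looks_like_white_noise = max_acf < 0.1
print(looks_like_white_noise)  # → True
```

If any autocorrelation is far outside the band, the model has left systematic structure in the residuals. In practice a formal test such as Ljung-Box does this check across many lags at once.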
These concepts determine whether standard time series methods will actually work on your data. Stationarity and autocorrelation are diagnostic checkpoints before any serious modeling begins.
A series is stationary when its statistical properties (mean, variance) stay constant over time, regardless of when you observe it.
Autocorrelation measures the correlation of a series with its own lagged values. It tells you how today's value relates to yesterday's, last week's, and so on.
Compare: Stationarity vs. Autocorrelation: stationarity is a property of the entire series (stable over time), while autocorrelation is a relationship between observations (dependence across lags). A series can be stationary with strong autocorrelation, or non-stationary with weak autocorrelation. They describe different things.
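The "stationary with strong autocorrelation" case is easy to demonstrate with a simulated AR(1) process, a standard example (the coefficient 0.8 and sample size here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1): x_t = 0.8 * x_{t-1} + e_t. Stationary because |0.8| < 1,
# yet each value is strongly correlated with the previous one.
n = 5000
x = np.zeros(n)
e = rng.normal(size=n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + e[t]

def autocorr(series, lag):
    s = series - series.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

r1 = autocorr(x, 1)  # should land near the theoretical value 0.8

# The mean stays stable across time, confirming stationarity in mean.
first_half_mean = x[: n // 2].mean()
second_half_mean = x[n // 2:].mean()
print(round(r1, 2))
```

The lag-1 autocorrelation is high (≈0.8) even though the series has a constant mean and variance, which is exactly the combination the comparison above describes.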
These methods help you extract insights from raw time series data. Decomposition separates the signal; smoothing reveals it.
Decomposition separates a time series into trend, seasonality, and residuals to reveal underlying structure hidden in the raw data.
There are two main forms: additive (series = trend + seasonality + residual), where the seasonal swings stay roughly the same size regardless of the level, and multiplicative (series = trend × seasonality × residual), where the swings scale with the level of the series.
A good rule of thumb: if a plot shows December spikes getting larger as overall sales grow, that's multiplicative. If the spikes stay about the same height year over year, that's additive.
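A classical additive decomposition can be sketched in a few lines of numpy: estimate the trend with a centered moving average, average the detrended values by position in the cycle to get the seasonal component, and treat what remains as residual. The synthetic monthly data below is an assumption for illustration (linear trend plus a fixed sinusoidal season, i.e., a genuinely additive structure):

```python
import numpy as np

# Synthetic monthly series: linear trend + fixed seasonal cycle + noise.
rng = np.random.default_rng(1)
period = 12
n = 10 * period
t = np.arange(n)
seasonal_true = 5.0 * np.sin(2 * np.pi * t / period)
y = 0.5 * t + seasonal_true + rng.normal(0, 0.5, size=n)

# 1. Trend: centered moving average over one full cycle (half weights at
#    the two endpoints, the classical choice when the period is even).
weights = np.r_[0.5, np.ones(period - 1), 0.5] / period
trend = np.convolve(y, weights, mode="same")
trend[: period // 2] = np.nan       # edges where the window is incomplete
trend[-(period // 2):] = np.nan

# 2. Seasonal: average the detrended values at each position in the cycle.
detrended = y - trend
seasonal = np.array([np.nanmean(detrended[i::period]) for i in range(period)])
seasonal -= seasonal.mean()         # normalize so the seasonal part sums to ~0

# 3. Residual: whatever trend + seasonal leave unexplained.
residual = y - trend - np.tile(seasonal, n // period)
print(round(float(np.abs(seasonal - seasonal_true[:period]).max()), 2))
```

For a multiplicative series you would divide instead of subtract at each step, or equivalently take logs first and run the additive procedure. Library routines (e.g., a classical `seasonal_decompose`) automate exactly this recipe.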
A moving average is a smoothing technique that averages consecutive observations to reduce noise and reveal trend.
Compare: Decomposition vs. Moving Average: decomposition explicitly separates all components into distinct series, while moving average smooths the data to highlight trend without fully isolating seasonality. Use decomposition for thorough analysis; use moving averages for quick trend visualization or as building blocks in models.
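Both the simple and the weighted moving average from the summary below reduce to a single dot product per window, which numpy's `convolve` handles directly. A minimal sketch (the `prices` data and weight choice are made up for illustration):

```python
import numpy as np

def moving_average(x, window, weights=None):
    """Moving average; pass `weights` for a weighted version."""
    x = np.asarray(x, dtype=float)
    if weights is None:
        weights = np.ones(window) / window       # simple: equal weights
    else:
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()        # normalize to sum to 1
    # 'valid' drops the edges where the window would be incomplete;
    # each output is the weighted average of one length-`window` slice.
    return np.convolve(x, weights[::-1], mode="valid")

prices = [10, 12, 11, 13, 15, 14, 16]
print(moving_average(prices, 3))                     # → [11. 12. 13. 14. 15.]
print(moving_average(prices, 3, weights=[1, 2, 3]))  # recent points weigh more
```

Note the smoothed series is shorter than the input by `window - 1` points, which is why decomposition routines leave gaps at the edges of the trend estimate.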
| Concept | Best Examples |
|---|---|
| Systematic patterns | Trend, Seasonality, Cyclical patterns, Level |
| Random variation | Irregular fluctuations, White noise |
| Fixed-period patterns | Seasonality |
| Variable-period patterns | Cyclical patterns |
| Stationarity requirements | Constant mean, constant variance, no trend |
| Transformation techniques | Differencing, log transform, decomposition |
| Smoothing methods | Moving average (simple and weighted) |
| Model diagnostic tools | Autocorrelation (ACF/PACF), white noise tests |
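The transformation techniques in the table combine naturally: when a series has both a trend and variance that grows with its level, take logs first (to stabilize variance), then difference (to remove the trend). A small sketch on simulated exponential-growth data (the growth rate and noise level are arbitrary choices for illustration):

```python
import numpy as np

# A series with an upward trend and variance that grows with the level:
# non-stationary in both mean and variance.
rng = np.random.default_rng(7)
t = np.arange(1, 201)
y = np.exp(0.02 * t) * (1 + 0.05 * rng.normal(size=t.size))

# Step 1: log transform stabilizes the variance (turns the
# multiplicative noise into roughly additive noise).
log_y = np.log(y)

# Step 2: first-differencing removes the (now linear-in-log) trend.
stationary = np.diff(log_y)

# The differenced log series should fluctuate around a constant mean
# (~0.02, the growth rate), with no remaining drift.
print(round(float(stationary.mean()), 3))
```

The order matters: logging first makes the trend linear so that a single difference removes it; differencing first would leave the growing variance untouched.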
A retail company notices sales increase every December but also sees larger overall swings during economic expansions. Which two components explain these patterns, and how do they differ in predictability?
You've fit a forecasting model and want to check if it captured all systematic patterns. What statistical property should the residuals exhibit, and how would you test for it?
Compare and contrast additive versus multiplicative decomposition. Under what data conditions would you choose one over the other?
Your time series has an upward trend and increasing variance over time. Which two techniques might you apply to achieve stationarity, and in what order?
If you're asked to "identify the appropriate model structure" for a dataset, which diagnostic tool would you use to examine dependence across time lags, and what patterns would suggest an AR versus MA component?