Stationarity in Time Series Analysis
Stationarity is the idea that a time series behaves the same way no matter when you observe it. Its statistical properties don't drift or shift over time. This matters because most classical time series models (ARMA, ARIMA) assume stationarity. If your data violates that assumption, your model's forecasts and parameter estimates can be misleading or flat-out wrong.
This section covers what stationarity actually means, the specific properties that define it, why it matters for modeling, and the distinction between strict and weak stationarity.
Stationarity in Time Series
A time series is stationary when its statistical properties stay constant over time. The mean doesn't trend upward or downward, the variance doesn't expand or shrink, and the correlation structure between observations at different time points doesn't change.
Why does this matter so much?
- Many core models (AR, MA, ARMA) are built on the assumption that the data-generating process is stable. If the process itself is shifting, these models can't reliably learn patterns from historical data and project them forward.
- A non-stationary series can produce spurious relationships, where two variables appear correlated simply because they both trend in the same direction, not because they're actually related.
- Trends, seasonality, or changing variance can all mask the true underlying patterns you're trying to model.
The bottom line: before you fit most time series models, you need to check whether your data is stationary, and if it isn't, you need to transform it until it is.

Properties of a Stationary Series
Three specific properties define (weak) stationarity. All three must hold simultaneously.
1. Constant mean — The expected value doesn't change over time.
This means the series fluctuates around the same level throughout. If you see an upward or downward trend, the mean isn't constant.
2. Constant variance — The spread of values stays stable.
If the series becomes more volatile in later periods (common with financial data like stock prices), the variance isn't constant.
3. Constant autocovariance — The covariance between any two observations depends only on the lag between them, not on when they occur.
For example, the correlation between observations 1 day apart should be the same whether you measure it in January or July. Formally, the autocovariance function depends only on the lag k, not on the time index t: Cov(X_t, X_{t+k}) = γ(k) for every t.
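The three properties are easiest to see by contrast. Below is a minimal numpy sketch (with an arbitrary random seed) comparing white noise, which satisfies all three properties, against a random walk, which violates the constant-variance property:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stationary example: white noise fluctuates around a fixed mean
# with a fixed variance, no matter when you look at it.
white_noise = rng.normal(loc=0.0, scale=1.0, size=2000)

# Non-stationary example: a random walk (cumulative sum of noise)
# wanders, and the spread of its values grows with time.
random_walk = np.cumsum(rng.normal(size=2000))

def halves_summary(x):
    """Compare the mean and variance of the first and second half of a series."""
    first, second = x[: len(x) // 2], x[len(x) // 2 :]
    return (first.mean(), second.mean()), (first.var(), second.var())

wn_means, wn_vars = halves_summary(white_noise)
rw_means, rw_vars = halves_summary(random_walk)

# For white noise the two halves look statistically alike;
# for the random walk the overall variance is far larger and
# typically differs sharply between halves.
print("white noise half-variances:", wn_vars)
print("random walk half-variances:", rw_vars)
```

Splitting a series in half and comparing summary statistics is only a crude heuristic, but it captures the intuition: a stationary series should look the same no matter which stretch of it you examine.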

Implications of Stationarity
Stationarity is what allows you to use historical patterns to make predictions about the future. If the statistical behavior of a series keeps changing, past patterns aren't a reliable guide.
Models that rely on stationarity:
- Autoregressive (AR) models predict the current value using a linear combination of its own past values.
- Moving Average (MA) models predict the current value using a linear combination of past forecast errors.
- ARMA models combine both AR and MA components into a single framework.
- ARIMA models extend ARMA by adding a differencing step (the "I" stands for "Integrated"), which can convert a non-stationary series into a stationary one before fitting the ARMA part.
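To illustrate why stationarity makes these models work, here is a hedged numpy-only sketch of the simplest case, an AR(1) process x_t = phi * x_{t-1} + noise (the coefficient 0.7 and the seed are arbitrary choices for the demonstration). Because the process is stationary when |phi| < 1, its parameter can be recovered from historical data by a lag-1 least-squares regression:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a stationary AR(1) process: x_t = phi * x_{t-1} + noise_t.
# |phi| < 1 ensures the process is (weakly) stationary.
phi = 0.7
n = 5000
noise = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]

# Stationarity means the past is a reliable guide to the future:
# regressing x_t on x_{t-1} recovers phi.
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print("true phi:", phi, "estimated phi:", round(phi_hat, 3))
```

If the process were non-stationary (say, phi = 1, a random walk), this estimate would no longer describe a stable relationship that holds across time.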
When your data isn't stationary, you typically need to transform it first:
- Differencing removes trends by computing the change between consecutive observations. For example, instead of modeling raw GDP, you model the change in GDP from one quarter to the next. Seasonal differencing (e.g., subtracting the value from 12 months ago) handles seasonal patterns.
- Variance-stabilizing transformations like logarithmic or square-root transforms can fix non-constant variance. Stock prices, for instance, often have variance that grows with the price level; taking the log compresses that growth.
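Both transformations above can be sketched in a few lines of numpy. The series below is synthetic (a random walk with drift standing in for something like GDP, and an exponentiated version standing in for a price-like series); the seed and scale constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# A trending, non-stationary series: a random walk with drift.
steps = 0.5 + rng.normal(size=500)   # stationary increments, mean 0.5
series = np.cumsum(steps)            # trending level, non-stationary

# Differencing: the change between consecutive observations.
# This recovers the underlying increments, which are stationary
# (constant mean equal to the drift, constant variance).
diff = np.diff(series)

# Variance stabilization: for a strictly positive series whose
# spread grows with its level, work with log differences instead.
prices = np.exp(series / 50)
log_returns = np.diff(np.log(prices))

print("mean of differenced series:", round(diff.mean(), 3))
```

Note the order of operations in practice: apply the variance-stabilizing transform (log) first, then difference the transformed series.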
Strict vs. Weak Stationarity
These are two different levels of the stationarity requirement, and the distinction matters for understanding what models actually assume.
Strict stationarity requires that the entire joint probability distribution of any collection of time points is unchanged when you shift them in time:

(X_{t_1}, X_{t_2}, …, X_{t_n}) has the same joint distribution as (X_{t_1+h}, X_{t_2+h}, …, X_{t_n+h})

for all choices of time points t_1, …, t_n and all shifts h. This is a very strong condition. It means every statistical property (not just mean and variance, but skewness, kurtosis, and the full shape of the distribution) must be time-invariant. In practice, strict stationarity is nearly impossible to verify because you'd need complete knowledge of the distribution.
Weak stationarity (also called covariance stationarity) only requires the three properties listed above: constant mean, constant variance, and autocovariance that depends only on lag. This is the version that matters in practice, and it's what people usually mean when they say "stationary" in the context of time series modeling.
For a Gaussian (normally distributed) time series, weak stationarity actually implies strict stationarity. That's because a Gaussian distribution is fully determined by its mean and covariance structure. For non-Gaussian series, weak stationarity is genuinely a weaker condition.
You can assess weak stationarity through visual inspection (plotting the series and looking for trends or changing spread) and formal statistical tests like unit root tests, which you'll encounter later in this unit.
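The visual-inspection route can be automated with rolling statistics. The sketch below is a minimal numpy version (window size, trend slope, and seed are arbitrary): if the rolling mean climbs or the rolling standard deviation drifts, the series is unlikely to be weakly stationary.

```python
import numpy as np

def rolling_stats(x, window):
    """Rolling mean and standard deviation, for eyeballing stationarity."""
    x = np.asarray(x, dtype=float)
    windows = np.lib.stride_tricks.sliding_window_view(x, window)
    return windows.mean(axis=1), windows.std(axis=1)

rng = np.random.default_rng(2)
stationary = rng.normal(size=1000)
trending = stationary + 0.01 * np.arange(1000)  # same noise plus a linear trend

m_stat, s_stat = rolling_stats(stationary, window=100)
m_trend, s_trend = rolling_stats(trending, window=100)

# The stationary series' rolling mean hovers near zero; the trending
# series' rolling mean climbs steadily, flagging non-stationarity.
print("stationary rolling-mean range:", m_stat.max() - m_stat.min())
print("trending rolling-mean range:", m_trend.max() - m_trend.min())
```

For a formal check, libraries such as statsmodels provide unit root tests (e.g., the augmented Dickey-Fuller test), which are covered later in this unit.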