Stationarity is the foundation of time series modeling. If you don't get this right, everything that follows falls apart. Most classical forecasting methods (ARIMA, exponential smoothing, and beyond) assume your data has stable statistical properties over time. When you're tested on stationarity, you're really being tested on your ability to diagnose a time series before modeling it and justify your preprocessing decisions.
These tests reveal the underlying structure of your data. You'll need to understand the difference between testing for a unit root versus testing for stationarity (they're not the same thing), recognize when visual diagnostics complement formal tests, and know which test to reach for when your data has quirks like structural breaks or seasonality. Don't just memorize test names. Know what null hypothesis each test uses and when to combine tests for a complete picture.
These tests ask: "Does this series have a unit root that makes it non-stationary?" The null hypothesis assumes non-stationarity, so you're looking for evidence against the null to conclude stationarity. A unit root means shocks to the series persist forever rather than dying out.
The ADF test is the most commonly used unit root test in econometrics and finance. Expect it on any exam covering stationarity.
The PP test shares the same null hypothesis as ADF (unit root present) but takes a different approach to handling autocorrelation in the errors.
Standard ADF and PP tests can lose power badly when the data contains a sudden shift in mean or trend. The Zivot-Andrews test addresses this directly.
Compare: ADF vs. PP: both test the same null hypothesis (unit root), but PP handles autocorrelation non-parametrically while ADF adds lagged terms. Use PP when you suspect heteroskedasticity; use ADF when you want more control over lag specification.
Unlike unit root tests, these tests flip the null hypothesis. They assume stationarity and look for evidence against it. This reversal is critical for exam questions asking you to distinguish between test types.
The KPSS test is the natural complement to the ADF test because it approaches the question from the opposite direction.
Compare: ADF vs. KPSS: they test opposite null hypotheses. When both agree (ADF rejects unit root, KPSS fails to reject stationarity), you can be confident the series is stationary. When they conflict, the situation is ambiguous and warrants further investigation.
Before running formal tests, visual diagnostics help you understand your data's structure. After fitting models, these same tools verify that your assumptions hold. ACF and PACF plots are your first line of defense in time series analysis.
The ACF plot shows the correlation between a series and its own lagged values at each lag.
The PACF at lag k isolates the direct relationship between y_t and y_{t-k} after removing the effects of all intermediate lags (1 through k-1).
After fitting a model, you need to check whether the residuals still contain autocorrelation. The Ljung-Box test does exactly this.
Compare: ACF vs. PACF: ACF shows total correlation at each lag (including indirect effects through intermediate lags), while PACF isolates direct effects only. For AR model identification, watch where PACF cuts off. For MA identification, watch where ACF cuts off.
Some time series have specific structures that require targeted testing. Standard unit root tests may be insufficient or inappropriate in these cases.
The variance ratio test directly targets the random walk hypothesis. If a series is a true random walk, its variance should scale linearly with the time horizon.
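The scaling idea can be checked in a few lines of numpy. This hand-rolls only the ratio itself, not the studentized Lo-MacKinlay statistic you would use for formal inference:

```python
# Minimal sketch: the variance ratio VR(q) for a random walk should be near 1.
# Hand-rolled ratio only; a real test (e.g. Lo-MacKinlay) adds a standard error.
import numpy as np

def variance_ratio(prices, q):
    """Var of q-period changes divided by q times the var of 1-period changes."""
    one_period = np.diff(prices)
    q_period = prices[q:] - prices[:-q]  # overlapping q-period changes
    return np.var(q_period, ddof=1) / (q * np.var(one_period, ddof=1))

rng = np.random.default_rng(3)
random_walk = np.cumsum(rng.standard_normal(5000))

vr4 = variance_ratio(random_walk, 4)
print(f"VR(4) = {vr4:.3f}")  # near 1 for a random walk; <1 hints at mean reversion
```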
Standard ADF only tests the zero-frequency (long-run) unit root. But a series can also have unit roots at seasonal frequencies, which require separate detection.
Compare: Standard unit root tests vs. Seasonal unit root tests: ADF/PP detect non-stationarity in the trend, while HEGY and seasonal tests detect non-stationarity in periodic patterns. You may need both types of differencing for a single series.
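To see why one round of differencing may not be enough, here is a numpy-only sketch of a quarterly series that needs both a first difference and a lag-4 (seasonal) difference; the simulated components are illustrative assumptions:

```python
# Minimal sketch: a series with a stochastic trend AND a quarterly pattern.
# Numbers are arbitrary; the point is that each difference removes one component.
import numpy as np

rng = np.random.default_rng(5)
n = 200
trend = np.cumsum(rng.standard_normal(n))           # zero-frequency unit root
seasonal = np.tile([3.0, -1.0, 2.0, -4.0], n // 4)  # quarterly pattern (period 4)
y = trend + seasonal

d1 = np.diff(y)           # first difference: removes the trend, seasonality remains
d1_d4 = d1[4:] - d1[:-4]  # lag-4 difference on top: removes the quarterly pattern

print(np.var(y).round(1), np.var(d1).round(1), np.var(d1_d4).round(1))
```

After both differences, the remaining variance is roughly that of the underlying shocks; neither difference alone gets you there.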
| Concept | Best Examples |
|---|---|
| Unit root detection (null: non-stationary) | ADF, PP, Zivot-Andrews |
| Stationarity testing (null: stationary) | KPSS |
| Structural break accommodation | Zivot-Andrews |
| Visual autocorrelation diagnosis | ACF plot, PACF plot |
| Model adequacy / residual checking | Ljung-Box test |
| Random walk hypothesis | Variance ratio test |
| Seasonal non-stationarity | HEGY test, Canova-Hansen test |
| AR order identification | PACF plot |
You run an ADF test and get a p-value of 0.03, then run a KPSS test and get a p-value of 0.15. What do these results together tell you about stationarity, and why is using both tests more informative than using just one?
Which two tests share the same null hypothesis (unit root present) but handle autocorrelation differently? When would you prefer one over the other?
Your ACF plot shows slow decay over many lags while your PACF shows a sharp cutoff after lag 2. What does this pattern suggest about (a) stationarity and (b) potential model structure?
Compare and contrast how you would test for stationarity in a quarterly GDP series that experienced a major policy change mid-sample versus a series with no obvious structural breaks.
A colleague claims their residuals are fine because the ACF plot looks clean. What formal test should they run to support this claim, and what null hypothesis would they be testing?