💹Financial Mathematics Unit 9 Review

9.3 Time series analysis

Written by the Fiveable Content Team • Last updated August 2025

Time series analysis studies sequential data points collected over time to identify patterns, trends, and relationships. In financial mathematics, these techniques drive forecasting, risk management, and investment decisions. This guide covers the core components, models, statistical tests, and practical applications you'll need to know.

Fundamentals of time series

A time series is simply a sequence of data points recorded at successive time intervals. Stock prices recorded daily, quarterly GDP figures, monthly unemployment rates: these are all time series. The goal is to understand what's driving the data so you can model it and forecast where it's headed.

Components of time series

Every time series can be broken into four components:

  • Trend captures the long-term direction. Is the series generally rising, falling, or flat over years or decades?
  • Seasonal captures regular patterns that repeat at fixed intervals. Retail sales spike every December; energy consumption rises every summer.
  • Cyclical reflects longer-term fluctuations that aren't tied to a calendar schedule. Business cycles of expansion and recession typically last 2-10 years.
  • Irregular (also called "noise") accounts for random, unpredictable variation that doesn't fit the other three components.

Decomposition is the process of separating a time series into these four components. Once separated, you can analyze each one independently, which makes forecasting much more tractable.

Stationarity vs non-stationarity

A stationary time series has constant statistical properties over time: its mean, variance, and autocorrelation structure don't change. Most time series models assume stationarity, so this concept matters a lot.

A non-stationary series has properties that shift over time. A stock price that trends upward for years is non-stationary because its mean keeps changing.

To convert a non-stationary series to a stationary one, you can:

  1. Differencing: Subtract the previous value from each value (first-order differencing). Apply again if needed.
  2. Detrending: Fit and remove a trend line from the data.
  3. Seasonal adjustment: Remove the seasonal component.

The Augmented Dickey-Fuller (ADF) test is the standard way to check for stationarity. Its null hypothesis is that the series contains a unit root (non-stationary). If you reject the null, you have evidence the series is stationary.
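Differencing from step 1 is simple enough to sketch directly; here's a minimal pure-Python version applied to a hypothetical trending price series:

```python
def difference(series, order=1):
    """Apply first-order differencing `order` times."""
    for _ in range(order):
        series = [series[i] - series[i - 1] for i in range(1, len(series))]
    return series

prices = [100, 102, 105, 109, 114]     # steadily trending, hence non-stationary
print(difference(prices))              # [2, 3, 4, 5]
print(difference(prices, order=2))     # [1, 1, 1]
```

Note that the first difference still trends upward here, so a second difference is needed before the series looks stationary; in practice the ADF test tells you when to stop.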

Autocorrelation and partial autocorrelation

Autocorrelation measures how correlated a time series is with its own past values. If today's stock return is correlated with yesterday's return, that's autocorrelation at lag 1.

  • The ACF (Autocorrelation Function) plot shows correlation coefficients at each lag. It helps you spot seasonality and determine the order of MA models.
  • Partial autocorrelation measures the correlation between a series and a specific lag after removing the effects of all intermediate lags. Think of it as the "direct" relationship at that lag.
  • The PACF (Partial Autocorrelation Function) plot helps you determine the order of AR models.

Together, ACF and PACF plots are your primary tools for identifying which time series model to use.
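As a sketch of what an ACF plot computes at each lag, here is the sample autocorrelation formula in plain Python (the data is a hypothetical repeating pattern):

```python
def autocorr(series, lag):
    """Sample autocorrelation of a series at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean) for t in range(lag, n))
    return cov / var

# A hypothetical series with a repeating up-down pattern
data = [1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3, 4, 5, 4, 3, 2]
print(round(autocorr(data, 1), 3))   # strong positive correlation at lag 1
```

An ACF plot simply repeats this calculation for lags 1, 2, 3, ... and draws the results as bars with confidence bands.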

Time series models

These models capture the structure in time series data so you can generate forecasts and understand underlying dynamics.

Autoregressive (AR) models

An AR model predicts the current value as a linear combination of its own past values plus a random error term. The order p tells you how many past values are included.

The AR(1) model:

Y_t = c + \phi_1 Y_{t-1} + \epsilon_t

Here, c is a constant, ϕ₁ is the coefficient on the first lag, and ϵₜ is a white noise error term. An AR(2) model adds ϕ₂Yₜ₋₂, and so on.

AR models work well for series that show persistence, where past values have a strong influence on future values. On a PACF plot, an AR(p) process shows significant spikes at lags 1 through p, then cuts off.
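A quick simulation makes the AR(1) recursion concrete; this sketch uses hypothetical parameter values c = 2.0 and ϕ₁ = 0.7:

```python
import random

def simulate_ar1(c, phi, n, seed=42):
    """Simulate an AR(1) process: Y_t = c + phi * Y_{t-1} + e_t."""
    rng = random.Random(seed)
    y = [c / (1 - phi)]                      # start at the unconditional mean
    for _ in range(n - 1):
        y.append(c + phi * y[-1] + rng.gauss(0, 1))
    return y

series = simulate_ar1(c=2.0, phi=0.7, n=500)
# With |phi| < 1 the process is stationary; its long-run mean is
# c / (1 - phi) = 2.0 / 0.3 ≈ 6.67, and the sample mean should be close.
print(sum(series) / len(series))
```

Fitting a real AR model to data is typically done with a library routine (e.g., statsmodels in Python) rather than by hand.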

Moving average (MA) models

An MA model predicts the current value as a function of past forecast errors (not past values). The order q tells you how many past error terms are included.

The MA(1) model:

Y_t = \mu + \epsilon_t + \theta_1 \epsilon_{t-1}

Here, μ is the mean, ϵₜ is the current error, and θ₁ weights the previous error. MA models are effective for series driven by short-term shocks or random fluctuations.

On an ACF plot, an MA(q) process shows significant spikes at lags 1 through q, then cuts off.
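The MA(1) recursion can be simulated the same way; this sketch uses hypothetical values μ = 0 and θ₁ = 0.5:

```python
import random

def simulate_ma1(mu, theta, n, seed=7):
    """Simulate an MA(1) process: Y_t = mu + e_t + theta * e_{t-1}."""
    rng = random.Random(seed)
    errors = [rng.gauss(0, 1) for _ in range(n + 1)]
    return [mu + errors[t + 1] + theta * errors[t] for t in range(n)]

series = simulate_ma1(mu=0.0, theta=0.5, n=2000)
# An MA(1) process has memory of exactly one period: its theoretical
# autocorrelation is theta / (1 + theta**2) = 0.4 at lag 1 and 0 beyond,
# which is why the ACF cuts off sharply after lag q.
print(sum(series) / len(series))   # should be near mu = 0
```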

ARMA and ARIMA models

ARMA(p,q) combines both AR and MA components into one model, with p autoregressive terms and q moving average terms. This handles series that have both persistent trends and short-term shock effects. ARMA requires the data to be stationary.

ARIMA(p,d,q) adds an integration step: the d parameter specifies how many times you difference the series to achieve stationarity. For example, ARIMA(1,1,1) means: difference once, then fit an ARMA(1,1) model.

The Box-Jenkins methodology is the standard process for building ARIMA models:

  1. Identification: Use ACF and PACF plots to determine candidate values of p, d, and q.
  2. Estimation: Fit the model parameters using maximum likelihood or least squares.
  3. Diagnostic checking: Examine residuals to confirm they resemble white noise (no remaining patterns). If they don't, revise and repeat.

GARCH models for volatility

Financial returns often exhibit volatility clustering: periods of high volatility tend to be followed by more high volatility, and calm periods follow calm periods. Standard ARIMA models can't capture this because they assume constant variance.

GARCH (Generalized Autoregressive Conditional Heteroskedasticity) models address this by allowing variance to change over time. The basic GARCH(1,1) model:

\sigma_t^2 = \omega + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2

  • ω is a constant baseline variance
  • α captures how much the previous squared shock (ϵₜ₋₁²) affects current volatility
  • β captures how much the previous period's variance persists
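The variance recursion above can be sketched directly; the parameter values and returns below are hypothetical:

```python
def garch_variance(returns, omega, alpha, beta):
    """Recursively compute GARCH(1,1) conditional variances."""
    var = omega / (1 - alpha - beta)     # start at the unconditional variance
    variances = [var]
    for r in returns[:-1]:
        var = omega + alpha * r ** 2 + beta * var
        variances.append(var)
    return variances

# Hypothetical daily returns: the large shock at t = 0 raises the
# conditional variance for the following days, then it decays back.
rets = [2.0, -1.0, 0.5]
print([round(v, 4) for v in garch_variance(rets, omega=0.1, alpha=0.1, beta=0.8)])
# [1.0, 1.3, 1.24]
```

Estimating ω, α, and β from data requires maximum likelihood (e.g., the rugarch package in R or the arch package in Python); the recursion itself is all that GARCH(1,1) adds on top.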

Extensions like EGARCH and GJR-GARCH handle asymmetric volatility, where negative shocks ("bad news") tend to increase volatility more than positive shocks of the same size. This asymmetry is well-documented in equity markets.

Time series decomposition

Decomposition separates a time series into its constituent parts so you can analyze each driver independently and improve forecast accuracy.

Trend analysis

The trend component shows the long-term direction of the series. Methods for estimating trend include:

  • Moving averages: Smooth out short-term fluctuations to reveal the underlying direction.
  • Regression analysis: Fit a linear or polynomial trend line to the data.
  • Exponential smoothing: Weight recent observations more heavily.

Detrending removes the trend component so you can focus on seasonal, cyclical, and irregular behavior. This is useful when you want to study whether a financial variable's short-term movements are unusual relative to its long-term path.
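A centered moving average, the first method above, is a few lines of Python (the series is hypothetical):

```python
def moving_average(series, window):
    """Centered moving average of odd window length for trend estimation."""
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

# Hypothetical noisy series with an underlying upward trend
data = [3, 5, 4, 6, 8, 7, 9, 11, 10, 12]
print(moving_average(data, 3))   # [4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0]
```

The smoothed output rises by exactly one unit per step, revealing the linear trend hidden by the noise; subtracting it from the original series is one way to detrend.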

Seasonal adjustments

Seasonal patterns repeat at known, fixed intervals. Quarterly GDP data, for instance, often shows predictable patterns tied to holiday spending or agricultural cycles.

Common methods for seasonal adjustment:

  • Seasonal differencing: Subtract the value from the same season in the prior year.
  • Dummy variables: Include indicator variables for each season in a regression model.
  • X-11/X-13 ARIMA-SEATS: Industry-standard methods used by statistical agencies to produce seasonally adjusted economic data.

Seasonally adjusted data lets you compare values across different periods on a level playing field. When the news reports "seasonally adjusted" job numbers, this is what they mean.
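Seasonal differencing, the first adjustment method above, can be sketched as (the quarterly sales figures are hypothetical):

```python
def seasonal_difference(series, period):
    """Subtract the value from the same season one full cycle earlier."""
    return [series[i] - series[i - period] for i in range(period, len(series))]

# Hypothetical quarterly sales with a recurring Q4 spike
sales = [100, 110, 105, 150, 104, 113, 109, 156]
print(seasonal_difference(sales, 4))   # [4, 3, 4, 6] -- the Q4 spike is gone
```

After differencing at lag 4, the series just shows modest year-over-year growth; the seasonal pattern has been removed.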

Cyclical patterns

Cyclical fluctuations are longer-term waves that don't follow a fixed schedule. Business cycles of expansion and contraction are the classic example, typically lasting anywhere from 2 to 10 years.

Methods for identifying cyclical patterns include:

  • Spectral analysis: Identifies dominant frequencies in the data.
  • Band-pass filters (such as the Hodrick-Prescott or Baxter-King filters): Isolate fluctuations within a specific frequency range.

Separating cyclical patterns from the long-term trend is genuinely difficult because both operate on longer time horizons. This ambiguity is one of the trickier aspects of decomposition.

Irregular components

The irregular component is what remains after you strip out trend, seasonal, and cyclical effects. It represents random noise, one-off events, and anything the other components can't explain.

Analyzing residuals helps you spot outliers and unexpected events (a flash crash, a surprise policy announcement). Smoothing techniques and robust statistical methods can reduce the influence of extreme irregular values on your model. If your irregular component still shows patterns, that's a sign your model is missing something.

Forecasting techniques

Simple exponential smoothing

This method assigns exponentially decreasing weights to older observations, so recent data matters more than distant data. The formula:

S_t = \alpha Y_t + (1 - \alpha) S_{t-1}

The smoothing parameter α ranges from 0 to 1. A higher α makes the forecast more responsive to recent changes; a lower α produces smoother, more stable forecasts.

Simple exponential smoothing works best for series with no clear trend or seasonality. It's commonly used for short-term forecasting of financial variables like exchange rates.
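The smoothing recursion translates directly into code; a minimal sketch with hypothetical exchange-rate data and α = 0.3:

```python
def exponential_smooth(series, alpha):
    """Simple exponential smoothing: S_t = alpha*Y_t + (1 - alpha)*S_{t-1}."""
    s = series[0]                  # initialize with the first observation
    smoothed = [s]
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

# Hypothetical daily exchange rates
rates = [1.10, 1.12, 1.08, 1.15, 1.11]
print([round(s, 4) for s in exponential_smooth(rates, alpha=0.3)])
```

The one-step-ahead forecast is simply the latest smoothed value; with α = 0.3 the forecast reacts to each new observation but damps most of the day-to-day noise.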

Holt-Winters method

Holt-Winters extends exponential smoothing to handle both trend and seasonality. It maintains three components, each with its own smoothing parameter:

  • Level: The current baseline value of the series.
  • Trend: The current rate of increase or decrease.
  • Seasonal: The current seasonal effect.

Two variants exist:

  • Additive: Use when seasonal fluctuations are roughly constant in size regardless of the level (e.g., sales increase by 500 units every December).
  • Multiplicative: Use when seasonal fluctuations scale with the level (e.g., sales increase by 20% every December).

This method is widely applied in sales forecasting, inventory management, and financial planning.

Box-Jenkins methodology

This is the systematic framework for building ARIMA models, already introduced above. The steps in more detail:

  1. Identify candidate models by examining ACF and PACF plots and testing for stationarity (ADF test). Determine appropriate values of p, d, and q.
  2. Estimate model parameters using maximum likelihood estimation.
  3. Diagnose the fitted model by checking whether residuals are white noise (use the Ljung-Box test). Also check that parameters are statistically significant.
  4. Iterate: If diagnostics reveal problems, revise the model and repeat.

Information criteria like AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) help you compare competing models. Lower values indicate a better balance of fit and parsimony.
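Both criteria are simple formulas; this sketch compares two hypothetical fitted models (the log-likelihoods and parameter counts are made up for illustration):

```python
import math

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k ln(n) - 2 ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical fits to n = 250 observations:
# an ARMA(1,1) with 3 parameters vs. a richer ARMA(2,2) with 5.
print(aic(-512.4, 3))   # ≈ 1030.8
print(aic(-510.9, 5))   # ≈ 1031.8 -- the extra parameters don't pay for themselves
```

Here the richer model fits slightly better (higher log-likelihood) but its AIC is worse, so the parsimonious ARMA(1,1) wins. BIC penalizes parameters more heavily, so it favors even smaller models as n grows.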

Statistical tests for time series

Unit root tests

Unit root tests determine whether a series is stationary. The most common is the Augmented Dickey-Fuller (ADF) test:

  • Null hypothesis: The series has a unit root (non-stationary).
  • Alternative hypothesis: The series is stationary.

If the test statistic is more negative than the critical value, you reject the null and conclude stationarity. This test is critical because running regression on non-stationary data can produce spurious regressions, where variables appear related when they aren't.

Other unit root tests include the Phillips-Perron test and the KPSS test (which reverses the null hypothesis, testing stationarity as the null).

Granger causality test

This test asks: does knowing past values of series X improve your forecast of series Y, beyond what Y's own past values provide?

  • Null hypothesis: X does not Granger-cause Y.
  • Alternative hypothesis: X Granger-causes Y.

The test compares a restricted model (Y predicted by its own lags only) against an unrestricted model (Y predicted by its own lags plus lags of X) using an F-test.

A crucial caveat: Granger causality is about predictive content, not true causation. If oil prices Granger-cause airline stock returns, that means oil prices contain useful forecasting information, not necessarily that oil prices directly cause stock movements.

Cointegration analysis

Two non-stationary series are cointegrated if a linear combination of them is stationary. In other words, they may wander individually, but they move together in the long run.

For example, the prices of Coca-Cola and Pepsi stock might each be non-stationary, but the spread between them could be stationary because the two companies face similar economic forces.

Common tests:

  • Engle-Granger two-step method: Regress one series on the other, then test the residuals for stationarity.
  • Johansen test: Tests for multiple cointegrating relationships simultaneously in a multivariate setting.

Cointegration is the foundation of pairs trading strategies and supports error correction models (ECMs) that capture both short-term dynamics and long-term equilibrium relationships.
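Step 1 of the Engle-Granger method is an ordinary least-squares regression; a minimal sketch with hypothetical price series:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x (step 1 of Engle-Granger)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Two hypothetical non-stationary price series that drift together
x = [10, 11, 12, 13, 14, 15]
y = [20.1, 21.9, 24.2, 25.8, 28.1, 29.9]
a, b = ols_fit(x, y)
spread = [yi - (a + b * xi) for xi, yi in zip(x, y)]
# Step 2 would apply an ADF test to `spread`; if the spread is
# stationary, the two series are cointegrated.
print(round(b, 3))   # 1.977
```

In a pairs trade, this `spread` is exactly the quantity traders monitor: when it strays far from zero, they bet on it reverting.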

Applications in finance

Stock price prediction

Time series models applied to historical price data attempt to forecast future stock prices. Common approaches include ARIMA models, machine learning algorithms, and neural networks. Technical analysis indicators like moving averages and the relative strength index (RSI) are often incorporated as features.

The challenge here is significant: the efficient market hypothesis suggests that prices already reflect available information, making consistent prediction difficult. Unexpected events (earnings surprises, geopolitical shocks) add further unpredictability.

Volatility forecasting

Predicting future volatility is arguably more tractable than predicting price direction. GARCH models and their variants are the workhorses here.

Another approach uses implied volatility extracted from option prices, which provides a market-based, forward-looking volatility estimate. The VIX index, for example, reflects implied volatility of S&P 500 options.

Volatility forecasts feed directly into options pricing, risk management, and portfolio optimization.

Risk assessment

Time series models underpin key risk metrics:

  • Value at Risk (VaR): Estimates the maximum expected loss over a given time horizon at a specified confidence level (e.g., "There's a 99% chance the portfolio won't lose more than $2 million tomorrow").
  • Expected Shortfall (ES): Measures the average loss in the worst-case scenarios beyond the VaR threshold.

Stress testing and scenario analysis use time series forecasts to simulate extreme market conditions. Monte Carlo simulations generate thousands of possible future paths based on fitted time series models.
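A minimal historical-simulation VaR can be sketched in a few lines (the returns and the k-th-worst-loss convention here are illustrative; real implementations differ in how they interpolate the quantile):

```python
import math

def historical_var(returns, confidence=0.95):
    """Historical-simulation VaR: the k-th worst loss, k = ceil((1-c) * n)."""
    losses = sorted((-r for r in returns), reverse=True)   # worst losses first
    k = math.ceil((1 - confidence) * len(losses))
    return losses[max(k, 1) - 1]

# 20 hypothetical daily returns
rets = [0.01, -0.02, 0.004, -0.015, 0.007, -0.03, 0.012, 0.002,
        -0.008, 0.005, -0.001, 0.009, -0.025, 0.003, -0.006, 0.011,
        -0.04, 0.006, -0.012, 0.008]
print(historical_var(rets, confidence=0.99))   # 0.04 -- the worst single-day loss
```

With only 20 observations the 99% VaR is just the worst historical loss, which illustrates the method's main weakness: the tail estimate is only as good as the history you feed it.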

Portfolio optimization

Time series analysis informs portfolio construction in several ways:

  • Mean-variance optimization uses historical return series and covariance matrices to find efficient portfolios.
  • Time-varying correlation models (like DCC-GARCH) capture the fact that asset correlations change over time, especially during crises when correlations tend to spike.
  • The Black-Litterman model blends time series forecasts with subjective investor views to produce more stable portfolio weights.

These tools support dynamic asset allocation, where portfolio weights are adjusted as market conditions evolve.

Time series visualization

Line plots and scatter plots

Line plots are the most natural way to display time series data: the x-axis shows time, the y-axis shows the variable, and connected points reveal trends and patterns at a glance. Adding trend lines or smoothing curves (like a 200-day moving average overlaid on daily stock prices) makes patterns clearer.

Scatter plots display the relationship between two variables and help identify correlations, clusters, and outliers. For instance, plotting returns of two assets against each other can reveal whether they tend to move together.

Autocorrelation function plots

ACF plots display the autocorrelation coefficient at each lag on the y-axis, with lag number on the x-axis. Dashed horizontal lines typically show 95% confidence intervals. Spikes that extend beyond these lines indicate statistically significant autocorrelation at that lag.

What to look for:

  • Slow decay in the ACF suggests non-stationarity or a strong AR component.
  • Sharp cutoff after lag q suggests an MA(q) process.
  • Periodic spikes (e.g., at lags 12, 24, 36 for monthly data) indicate seasonality.

Partial autocorrelation function plots

PACF plots have the same structure as ACF plots but show partial autocorrelation coefficients. The key difference is that each bar represents the direct correlation at that lag, with the effects of shorter lags removed.

What to look for:

  • Sharp cutoff after lag p suggests an AR(p) process.
  • Used alongside the ACF plot to distinguish AR from MA behavior and select appropriate ARIMA model orders.

Advanced time series concepts

Vector autoregression (VAR)

VAR models extend the AR framework to multiple time series simultaneously. Each variable is modeled as a linear function of its own past values and the past values of every other variable in the system.

For example, a VAR model might capture how interest rates, inflation, and GDP growth interact over time. Key tools for interpreting VAR models include:

  • Impulse response functions: Show how a shock to one variable propagates through the system over time.
  • Variance decomposition: Reveals what fraction of each variable's forecast error is attributable to shocks in the other variables.

Multivariate time series analysis

Beyond VAR, multivariate techniques include:

  • Vector Error Correction Models (VECM): Used when variables are cointegrated. The VECM captures both short-term dynamics and the long-run equilibrium relationship.
  • Dynamic factor models: Reduce a large number of time series to a smaller set of common factors, useful when you have many correlated financial variables.

These methods support portfolio analysis, risk management, and macroeconomic modeling where multiple variables interact.

Spectral analysis

Spectral analysis moves from the time domain to the frequency domain, decomposing a time series into cyclical components at different frequencies.

The Fourier transform converts time-domain data into a frequency representation. The periodogram and spectral density estimates show which frequencies (cycle lengths) carry the most power in the data.

This is useful for detecting hidden periodicities in financial data, such as identifying whether a market index has a dominant cycle of, say, 40 months that might correspond to a business cycle.
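A naive discrete Fourier transform is enough to illustrate the idea; this sketch recovers the dominant cycle of a hypothetical periodic series:

```python
import cmath

def periodogram(series):
    """Naive DFT periodogram: power at frequencies k/n, k = 1 .. n//2."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]       # demean to drop the zero frequency
    powers = []
    for k in range(1, n // 2 + 1):
        coef = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        powers.append(abs(coef) ** 2 / n)
    return powers

# A pure cycle of period 4 sampled over 12 points: all spectral power
# should land at k = 3 (three full cycles in twelve observations).
wave = [0, 1, 0, -1] * 3
p = periodogram(wave)
print(max(range(len(p)), key=lambda i: p[i]) + 1)   # 3
```

Production code would use a fast Fourier transform (numpy.fft in Python) and a smoothed spectral density estimate, but the interpretation is the same: the frequency with the most power marks the dominant cycle length.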

Software tools for time series

R packages for time series

  • stats: Built-in functions for basic time series work (arima, acf, pacf)
  • forecast: Advanced forecasting methods including auto.arima, ets, and tbats
  • tseries: Unit root tests (ADF test) and other time series utilities
  • xts and zoo: Flexible time series object classes for handling irregular and regular time series
  • rugarch: Comprehensive GARCH modeling for volatility analysis

Python libraries for time series

  • pandas: Core data manipulation with strong time series indexing and resampling support
  • statsmodels: ARIMA, VAR, unit root tests, and other econometric tools
  • scikit-learn: Machine learning algorithms that can be applied to time series features
  • prophet (by Meta): Designed for business forecasting with automatic handling of seasonality and holidays
  • pmdarima: Implements auto-ARIMA for automatic model order selection, similar to R's auto.arima

Commercial software options

  • SAS Time Series Studio: Enterprise-grade time series analysis and forecasting
  • MATLAB Financial Toolbox: Specialized functions for financial time series and econometrics
  • EViews: Focused on econometric analysis, popular in academic research
  • Tableau: Interactive visualization of time series data for business intelligence
  • Bloomberg Terminal: Includes time series tools integrated with real-time financial market data