Fiveable

Intro to Time Series Unit 12 Review

12.2 Kalman filter algorithm

Written by the Fiveable Content Team • Last updated August 2025

Kalman Filter Algorithm

The Kalman filter estimates the hidden state of a dynamic system from noisy observations. It works recursively: at each time step, it predicts where the system should be, then corrects that prediction using new data. This predict-correct cycle is the core of the algorithm, and understanding it well will make state-space models click.

Purpose

The Kalman filter solves a fundamental problem: you can't directly observe the true state of a system, but you can observe noisy measurements related to that state. The filter gives you the best possible estimate (in a least-squares sense) by doing two things at every time step:

  • Predicting the next state based on a model of how the system evolves
  • Updating that prediction when a new observation arrives, weighting the correction by how much you trust the observation versus the model

Uncertainty is tracked throughout using error covariance matrices, so you always know how confident the estimate is. Applications range from GPS tracking to weather forecasting to financial modeling.

[Image: Purpose of the Kalman filter — source: Kalman filter, Wikipedia]

The State-Space Setup

Before running the filter, you need two equations that define your system.

State transition equation (how the hidden state evolves):

x_t = F x_{t-1} + w_t, where w_t \sim N(0, Q)

  • F is the state transition matrix, encoding how the previous state maps to the current one
  • w_t is process noise, capturing model imperfections, with covariance Q

Observation equation (how measurements relate to the state):

z_t = H x_t + v_t, where v_t \sim N(0, R)

  • H is the observation matrix, mapping the state into measurement space
  • v_t is observation noise, with covariance R

Together, F, Q, H, and R fully specify the linear Gaussian model the Kalman filter operates on.
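
As a concrete sketch, here is one way to write these four matrices in NumPy. The model choice (constant-velocity tracking) and the specific noise values are illustrative assumptions, not something the guide prescribes:

```python
import numpy as np

dt = 1.0  # sampling interval (an assumed value for this sketch)

# Hypothetical constant-velocity model: the hidden state is
# [position, velocity], but only position is measured.
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # state transition: position += velocity * dt
Q = 0.01 * np.eye(2)         # process noise covariance (assumed small)
H = np.array([[1.0, 0.0]])   # observation matrix: picks out position only
R = np.array([[0.5]])        # observation noise covariance
```

Note how H is 1×2: it maps the 2-D state into the 1-D measurement space, matching z_t = H x_t + v_t.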

[Image: How a Kalman filter works, in pictures — source: 灰信网 (software development blog aggregator)]

Prediction and Update Steps

This is the heart of the algorithm. Each time step has two phases.

Prediction step (project the state and covariance forward):

  1. Predicted state estimate: \hat{x}_{t|t-1} = F \hat{x}_{t-1|t-1}
  2. Predicted error covariance: P_{t|t-1} = F P_{t-1|t-1} F^T + Q

The predicted covariance grows here because process noise (Q) gets added. The model alone always makes you less certain.
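
In code, the prediction step is only two lines. A minimal NumPy sketch (the function name and shapes are my own, assuming the matrices from the setup section):

```python
import numpy as np

def predict(x_hat, P, F, Q):
    """Project the state estimate and its error covariance one step forward."""
    x_pred = F @ x_hat            # \hat{x}_{t|t-1} = F \hat{x}_{t-1|t-1}
    P_pred = F @ P @ F.T + Q      # P_{t|t-1} = F P_{t-1|t-1} F^T + Q
    return x_pred, P_pred

# With F = I, the predicted covariance is exactly P + Q: adding process
# noise can only increase uncertainty.
x_pred, P_pred = predict(np.array([[1.0]]), np.array([[0.5]]),
                         np.eye(1), np.array([[0.1]]))
```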

Update step (correct the prediction using the new observation z_t):

  1. Kalman gain: K_t = P_{t|t-1} H^T (H P_{t|t-1} H^T + R)^{-1}

  2. Updated state estimate: \hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t (z_t - H \hat{x}_{t|t-1})

  3. Updated error covariance: P_{t|t} = (I - K_t H) P_{t|t-1}

The term z_t - H \hat{x}_{t|t-1} is called the innovation (or measurement residual). It's the gap between what you observed and what you predicted you'd observe. The Kalman gain K_t controls how aggressively you correct toward the observation.

How to think about the Kalman gain: when observation noise R is small relative to the predicted uncertainty P_{t|t-1}, the gain is large and the filter trusts the measurement more. When R is large (noisy sensors), the gain shrinks and the filter leans on the model prediction instead. The filter automatically balances these two sources of information.
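
The update step can be sketched the same way (again a minimal NumPy version with names of my own choosing). The scalar demo at the bottom also illustrates the gain behavior just described:

```python
import numpy as np

def update(x_pred, P_pred, z, H, R):
    """Correct the predicted state using the new observation z."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    innovation = z - H @ x_pred           # z_t - H \hat{x}_{t|t-1}
    x_upd = x_pred + K @ innovation
    P_upd = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_upd, P_upd, K

# Scalar case with H = 1 and P_pred = 1, so K = 1 / (1 + R):
x, P, H = np.array([[0.0]]), np.array([[1.0]]), np.eye(1)
z = np.array([[1.0]])
_, _, K_trusting = update(x, P, z, H, np.array([[0.01]]))    # precise sensor
_, _, K_skeptical = update(x, P, z, H, np.array([[100.0]]))  # noisy sensor
```

Here K_trusting comes out near 1 (follow the measurement) while K_skeptical comes out near 0 (stick with the model prediction).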

Prediction vs. Updating vs. Smoothing

These three operations differ in which observations they use:

  • Prediction (filtering forward): Estimates the state at time t using only observations up to t-1. This is what you do before the new measurement arrives.
  • Updating (filtering): Estimates the state at time t using observations up to and including t. This is the corrected estimate after incorporating the latest data.
  • Smoothing: Estimates the state at time t using all observations, including those from times after t. Since it uses future information, smoothing produces more accurate estimates but can only be done retrospectively.

Two common smoothing variants:

  • Fixed-interval smoothing re-estimates all states over a complete time window after all data is collected
  • Fixed-lag smoothing estimates the state a fixed number of steps in the past, useful when you can tolerate a small delay
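
For concreteness, fixed-interval smoothing is commonly implemented with the Rauch-Tung-Striebel (RTS) backward recursion, a standard algorithm not named in this guide. A sketch, assuming you have stored the filter's predicted and updated quantities at every step (the tiny two-step demo uses made-up filter outputs purely for illustration):

```python
import numpy as np

def rts_smooth(x_filt, P_filt, x_pred, P_pred, F):
    """Backward pass over stored filter outputs.

    x_filt[t], P_filt[t]: updated estimates  \hat{x}_{t|t}, P_{t|t}
    x_pred[t], P_pred[t]: predicted estimates \hat{x}_{t|t-1}, P_{t|t-1}
    """
    T = len(x_filt)
    x_smooth, P_smooth = [None] * T, [None] * T
    x_smooth[-1], P_smooth[-1] = x_filt[-1], P_filt[-1]
    for t in range(T - 2, -1, -1):
        C = P_filt[t] @ F.T @ np.linalg.inv(P_pred[t + 1])  # smoother gain
        x_smooth[t] = x_filt[t] + C @ (x_smooth[t + 1] - x_pred[t + 1])
        P_smooth[t] = P_filt[t] + C @ (P_smooth[t + 1] - P_pred[t + 1]) @ C.T
    return x_smooth, P_smooth

# Two-step scalar demo with invented filter outputs (F = 1, Q = 0.1,
# so P_pred[1] = P_filt[0] + Q = 0.6):
F = np.eye(1)
x_filt = [np.array([[0.5]]), np.array([[0.8]])]
P_filt = [np.array([[0.5]]), np.array([[0.4]])]
x_pred = [None, np.array([[0.5]])]   # index 0 is never used by the recursion
P_pred = [None, np.array([[0.6]])]
x_smooth, P_smooth = rts_smooth(x_filt, P_filt, x_pred, P_pred, F)
```

Because the backward pass folds in future observations, the smoothed covariance at earlier times is no larger than the filtered one.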

Implementation Steps

To implement the Kalman filter in practice (using Python with NumPy, MATLAB, R, etc.):

  1. Define the model matrices FF, QQ, HH, and RR based on your system

  2. Set initial conditions: choose an initial state estimate \hat{x}_{0|0} and initial error covariance P_{0|0}. If you're unsure about the initial state, set P_{0|0} large to reflect that uncertainty

  3. Loop through each time step:

    • Run the prediction step to get \hat{x}_{t|t-1} and P_{t|t-1}
    • Compute the Kalman gain K_t
    • Run the update step to get \hat{x}_{t|t} and P_{t|t}
  4. Store results (state estimates and covariances) at each step for later analysis or plotting

  5. Validate by testing on simulated data where you know the true state, then compare your filtered estimates against ground truth
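
The five steps above can be sketched end-to-end in NumPy. Everything here is illustrative: a 1-D local level model (x_t = x_{t-1} + w_t, z_t = x_t + v_t) with assumed noise variances, run on simulated data so the estimates can be validated against ground truth:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: model "matrices" (plain scalars here, since the state is 1-D)
F, Q, H, R = 1.0, 0.01, 1.0, 0.25

# Simulated ground truth and noisy observations, for step 5
T = 200
x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), T))
z = x_true + rng.normal(0.0, np.sqrt(R), T)

# Step 2: initial conditions, with a large P to reflect initial uncertainty
x_hat, P = 0.0, 10.0
estimates, variances = [], []

# Step 3: predict -> gain -> update at every time step
for t in range(T):
    x_pred = F * x_hat                         # prediction step
    P_pred = F * P * F + Q
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    x_hat = x_pred + K * (z[t] - H * x_pred)   # update step
    P = (1 - K * H) * P_pred
    estimates.append(x_hat)                    # step 4: store results
    variances.append(P)

# Step 5: the filtered estimates should track the truth more closely
# than the raw observations do
rmse_filter = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
rmse_obs = np.sqrt(np.mean((z - x_true) ** 2))
```

With these noise settings the filtered RMSE comes out well below the raw-observation RMSE, and the error variance P quickly shrinks from its large initial value toward a small steady state.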

A common sanity check: the innovation sequence z_t - H \hat{x}_{t|t-1} should look like white noise if your model is well-specified. If it shows patterns, your model matrices likely need adjustment.
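
A sketch of that check, again assuming an illustrative 1-D local level model: when the filter runs on data actually generated by the assumed model, the lag-1 autocorrelation of the innovations should be statistically indistinguishable from zero.

```python
import numpy as np

rng = np.random.default_rng(1)
F, Q, H, R = 1.0, 0.01, 1.0, 0.25  # assumed model (matches the simulator)
T = 500
x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), T))
z = x_true + rng.normal(0.0, np.sqrt(R), T)

x_hat, P = 0.0, 10.0
innovations = []
for t in range(T):
    x_pred = F * x_hat
    P_pred = F * P * F + Q
    innovations.append(z[t] - H * x_pred)      # z_t - H \hat{x}_{t|t-1}
    K = P_pred * H / (H * P_pred * H + R)
    x_hat = x_pred + K * (z[t] - H * x_pred)
    P = (1 - K * H) * P_pred

# Lag-1 sample autocorrelation of the innovations (initial transient dropped);
# a value near zero is consistent with white noise.
nu = np.array(innovations[10:])
nu = nu - nu.mean()
rho1 = np.sum(nu[1:] * nu[:-1]) / np.sum(nu ** 2)
```

In practice you would examine several lags at once (for example with a Ljung-Box test) rather than just lag 1, but the idea is the same: structure left in the innovations means information your model failed to use.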