Kalman Filter Algorithm
The Kalman filter estimates the hidden state of a dynamic system from noisy observations. It works recursively: at each time step, it predicts where the system should be, then corrects that prediction using new data. This predict-correct cycle is the core of the algorithm, and understanding it well will make state-space models click.
Purpose
The Kalman filter solves a fundamental problem: you can't directly observe the true state of a system, but you can observe noisy measurements related to that state. The filter gives you the best possible estimate (in a least-squares sense) by doing two things at every time step:
- Predicting the next state based on a model of how the system evolves
- Updating that prediction when a new observation arrives, weighting the correction by how much you trust the observation versus the model
Uncertainty is tracked throughout using error covariance matrices, so you always know how confident the estimate is. Applications range from GPS tracking to weather forecasting to financial modeling.

The State-Space Setup
Before running the filter, you need two equations that define your system.
State transition equation (how the hidden state evolves):
x_k = F x_{k-1} + w_k, where
- F is the state transition matrix, encoding how the previous state maps to the current one
- w_k is process noise, capturing model imperfections, with covariance Q
Observation equation (how measurements relate to the state):
z_k = H x_k + v_k, where
- H is the observation matrix, mapping the state into measurement space
- v_k is observation noise, with covariance R
Together, F, H, Q, and R fully specify the linear Gaussian model the Kalman filter operates on.
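As a concrete sketch of this setup, here is a hypothetical 1-D constant-velocity tracker: the state holds position and velocity, and only position is observed. The specific dt, Q, and R values are illustrative tuning assumptions, not anything the filter prescribes:

```python
import numpy as np

# Hypothetical 1-D constant-velocity model: state x = [position, velocity],
# with position observed. dt, Q, and R values are illustrative assumptions.
dt = 1.0

F = np.array([[1.0, dt],    # position += velocity * dt
              [0.0, 1.0]])  # velocity carries over (up to noise)
H = np.array([[1.0, 0.0]])  # measurement picks out position only

Q = 0.01 * np.eye(2)   # process noise covariance (how much you trust the model)
R = np.array([[0.5]])  # observation noise covariance (how much you trust the sensor)
```

Everything the filter does downstream is determined by these four matrices plus the initial conditions.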

Prediction and Update Steps
This is the heart of the algorithm. Each time step has two phases.
Prediction step (project the state and covariance forward):
- Predicted state estimate: x̂_{k|k-1} = F x̂_{k-1|k-1}
- Predicted error covariance: P_{k|k-1} = F P_{k-1|k-1} F^T + Q
The predicted covariance grows here because process noise (Q) gets added. The model alone always makes you less certain.
Update step (correct the prediction using the new observation z_k):
- Kalman gain: K_k = P_{k|k-1} H^T (H P_{k|k-1} H^T + R)^{-1}
- Updated state estimate: x̂_{k|k} = x̂_{k|k-1} + K_k (z_k - H x̂_{k|k-1})
- Updated error covariance: P_{k|k} = (I - K_k H) P_{k|k-1}
The term z_k - H x̂_{k|k-1} is called the innovation (or measurement residual). It's the gap between what you observed and what you predicted you'd observe. The Kalman gain controls how aggressively you correct toward the observation.
How to think about the Kalman gain: When observation noise R is small relative to the predicted uncertainty P_{k|k-1}, the gain is large and the filter trusts the measurement more. When R is large (noisy sensors), the gain shrinks and the filter leans on the model prediction instead. The filter automatically balances these two sources of information.
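The predict-update cycle translates almost line for line into NumPy. The model matrices and the prior (x_est, P_est) below are illustrative assumptions for a 1-D constant-velocity tracker:

```python
import numpy as np

# Assumed model: 1-D constant-velocity tracker, position observed.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])

x_est = np.array([[0.0], [1.0]])  # prior state estimate
P_est = np.eye(2)                 # prior error covariance

# Prediction step: project state and covariance forward
x_pred = F @ x_est
P_pred = F @ P_est @ F.T + Q      # adding Q grows the uncertainty

# Update step: correct the prediction with a new observation z
z = np.array([[1.2]])
S = H @ P_pred @ H.T + R              # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
innovation = z - H @ x_pred           # measurement residual
x_est = x_pred + K @ innovation       # corrected state
P_est = (np.eye(2) - K @ H) @ P_pred  # corrected (smaller) covariance
```

One design note: the (I - K H) P covariance form matches the equations above and is the simplest; production implementations often prefer the numerically more robust Joseph form.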
Prediction vs. Updating vs. Smoothing
These three operations differ in which observations they use:
- Prediction (filtering forward): Estimates the state at time k using only observations up to k-1. This is what you do before the new measurement arrives.
- Updating (filtering): Estimates the state at time k using observations up to and including k. This is the corrected estimate after incorporating the latest data.
- Smoothing: Estimates the state at time k using all observations, including those from times after k. Since it uses future information, smoothing produces more accurate estimates but can only be done retrospectively.
Two common smoothing variants:
- Fixed-interval smoothing re-estimates all states over a complete time window after all data is collected
- Fixed-lag smoothing estimates the state a fixed number of steps in the past, useful when you can tolerate a small delay
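Fixed-interval smoothing can be sketched with the Rauch-Tung-Striebel (RTS) recursion: run the forward filter while storing both predicted and filtered quantities, then sweep backward folding in future data. The model matrices and the toy measurement values below are illustrative assumptions:

```python
import numpy as np

# Fixed-interval (RTS) smoothing sketch on an assumed 1-D constant-velocity model.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])

zs = [np.array([[v]]) for v in (0.1, 1.3, 1.9, 3.2, 4.1)]  # toy measurements

# Forward pass: store predicted and filtered estimates at every step.
x_pred, P_pred, x_filt, P_filt = [], [], [], []
x, P = np.zeros((2, 1)), np.eye(2)
for z in zs:
    xp, Pp = F @ x, F @ P @ F.T + Q
    K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
    x, P = xp + K @ (z - H @ xp), (np.eye(2) - K @ H) @ Pp
    x_pred.append(xp); P_pred.append(Pp)
    x_filt.append(x);  P_filt.append(P)

# Backward pass: smoothed estimates use ALL observations, so their
# uncertainty is never larger than the filtered uncertainty.
x_smooth, P_smooth = x_filt[-1:], P_filt[-1:]
for k in range(len(zs) - 2, -1, -1):
    C = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])  # smoother gain
    x_smooth.insert(0, x_filt[k] + C @ (x_smooth[0] - x_pred[k + 1]))
    P_smooth.insert(0, P_filt[k] + C @ (P_smooth[0] - P_pred[k + 1]) @ C.T)
```

Fixed-lag smoothing follows the same idea but only sweeps back a fixed number of steps, trading a small delay for improved estimates.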
Implementation Steps
To implement the Kalman filter in practice (using Python with NumPy, MATLAB, R, etc.):
- Define the model matrices F, H, Q, and R based on your system
- Set initial conditions: choose an initial state estimate x̂_0 and initial error covariance P_0. If you're unsure about the initial state, set P_0 large to reflect that uncertainty
- Loop through each time step:
  - Run the prediction step to get x̂_{k|k-1} and P_{k|k-1}
  - Compute the Kalman gain K_k
  - Run the update step to get x̂_{k|k} and P_{k|k}
- Store results (state estimates and covariances) at each step for later analysis or plotting
- Validate by testing on simulated data where you know the true state, then compare your filtered estimates against ground truth
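The steps above can be sketched end to end in NumPy. This assumes the same hypothetical 1-D constant-velocity model; the noise levels, horizon, and seed are illustrative choices:

```python
import numpy as np

# End-to-end sketch: define the model, set initial conditions, loop
# predict/update over simulated data, store results, and validate against
# the known ground truth. All numeric values are illustrative assumptions.
rng = np.random.default_rng(42)

# Model matrices (1-D constant-velocity system, position measured)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

# Simulate ground truth and noisy measurements
T = 100
truth = np.zeros((T, 2, 1))
truth[0] = [[0.0], [1.0]]
zs = np.zeros((T, 1, 1))
for k in range(T):
    if k > 0:
        truth[k] = F @ truth[k - 1] + rng.multivariate_normal(
            np.zeros(2), Q).reshape(2, 1)
    zs[k] = H @ truth[k] + rng.normal(0.0, np.sqrt(R[0, 0]))

# Initial conditions: vague prior, so start with a large covariance
x = np.zeros((2, 1))
P = 100.0 * np.eye(2)

# Filter loop, storing estimates and covariances at each step
estimates, covariances = [], []
for k in range(T):
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K @ (zs[k] - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    estimates.append(x)
    covariances.append(P)

# Validate: the filtered position error should beat the raw measurement error
est_err = np.mean([(estimates[k][0, 0] - truth[k][0, 0]) ** 2 for k in range(T)])
meas_err = np.mean([(zs[k][0, 0] - truth[k][0, 0]) ** 2 for k in range(T)])
```

Because the filter fuses the motion model with each measurement, the filtered position error should come out below the raw measurement error on a well-specified simulation like this one.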
A common sanity check: the innovation sequence should look like white noise if your model is well-specified. If it shows patterns, your model matrices likely need adjustment.
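One way to run that sanity check is to look at the lag-1 autocorrelation of the stored innovations. The two sequences below are stand-ins, not output from a real filter: a white sequence for a healthy filter, and a patterned one for a misspecified model:

```python
import numpy as np

def lag1_autocorr(seq):
    """Lag-1 autocorrelation of a 1-D sequence (near 0 for white noise)."""
    c = seq - seq.mean()
    return float(np.sum(c[1:] * c[:-1]) / np.sum(c ** 2))

rng = np.random.default_rng(0)
white = rng.normal(size=500)              # stand-in for healthy innovations
patterned = np.sin(0.3 * np.arange(500))  # stand-in for misspecified-model residuals

# For white noise, |autocorr| should fall roughly within 2/sqrt(N) of zero.
threshold = 2.0 / np.sqrt(500)
white_ac = lag1_autocorr(white)          # near zero: model looks fine
patterned_ac = lag1_autocorr(patterned)  # far from zero: revisit F, H, Q, R
```

A full whiteness test would examine several lags (e.g. a Ljung-Box-style statistic), but the lag-1 check already catches gross model mismatch.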