Moving Averages
Simple moving averages calculation
A simple moving average (SMA) smooths out short-term fluctuations by taking the average of a fixed number of past observations. That fixed number is called the window size or order, often written as $k$.
The formula for the SMA of order $k$ at time $t$ is:

$$\text{SMA}_t = \frac{1}{k} \sum_{i=1}^{k} y_{t-k+i}$$

where $y_t$ is the observed value at time $t$. Every observation in the window gets equal weight of $1/k$.
Example: 3-period SMA
Given observations $y_1 = 10$, $y_2 = 12$, $y_3 = 11$, $y_4 = 14$:
- At $t = 3$: $\text{SMA}_3 = \frac{10 + 12 + 11}{3} = 11$
- At $t = 4$: $\text{SMA}_4 = \frac{12 + 11 + 14}{3} \approx 12.33$
Notice that each new SMA value drops the oldest observation and adds the newest one. This "sliding window" is what makes it a moving average.
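The sliding-window calculation above can be sketched in Python (the `sma` helper and the plain-list input are illustrative assumptions, not from any particular library):

```python
def sma(values, k):
    """Return the k-period simple moving average for each complete window."""
    # Each output point averages the k most recent observations up to that index.
    return [sum(values[i - k:i]) / k for i in range(k, len(values) + 1)]

obs = [10, 12, 11, 14]  # illustrative observations
print(sma(obs, 3))  # one value per complete 3-period window
```

Each step of the list comprehension drops the oldest observation and picks up the newest, mirroring the sliding window described above.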

Weighted moving averages computation
A weighted moving average (WMA) lets you assign different weights to past observations, typically giving more importance to recent values. The key constraint is that all weights must sum to 1.
The formula for the WMA of order $k$ at time $t$ is:

$$\text{WMA}_t = \sum_{i=1}^{k} w_i \, y_{t-k+i}$$

where $w_i$ is the weight for observation $y_{t-k+i}$, and $\sum_{i=1}^{k} w_i = 1$.
Example: 3-period WMA with weights $w_1 = 0.2$, $w_2 = 0.3$, $w_3 = 0.5$
Given the same observations $y_1 = 10$, $y_2 = 12$, $y_3 = 11$, $y_4 = 14$:
- At $t = 3$: $\text{WMA}_3 = 0.2(10) + 0.3(12) + 0.5(11) = 11.1$
- At $t = 4$: $\text{WMA}_4 = 0.2(12) + 0.3(11) + 0.5(14) = 12.7$
Compare to $\text{SMA}_4 \approx 12.33$. The WMA reacts more strongly to the recent jump to 14 because it places half the total weight on the most recent observation. This is the core trade-off: a WMA tracks recent changes more closely, but it is also more sensitive to noise.
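A minimal WMA sketch in the same style (the `wma` helper is an illustrative assumption; weights are passed oldest to newest, and the sum-to-1 check enforces the constraint stated above):

```python
def wma(values, weights):
    """Weighted moving average; `weights` run oldest to newest and must sum to 1."""
    k = len(weights)
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    # Pair each weight with its observation inside the sliding window.
    return [
        sum(w * y for w, y in zip(weights, values[i - k:i]))
        for i in range(k, len(values) + 1)
    ]

obs = [10, 12, 11, 14]
print(wma(obs, [0.2, 0.3, 0.5]))  # the most recent observation gets half the weight
```

Setting every weight to $1/k$ recovers the SMA, which is one way to sanity-check the implementation.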

Advantages vs limitations of moving averages
- Advantages
  - Simple to understand and implement
  - Smooths out short-term fluctuations, making underlying trends easier to see
  - Helps identify trend direction and potential reversals
- Limitations
  - Always a lagging indicator because it relies entirely on past data
  - Can miss sudden shifts like outliers or structural breaks
  - Sensitive to the choice of order $k$: too small and you get noisy output; too large and you over-smooth real changes
  - Does not account for seasonality, cyclical patterns, or other complex structure on its own
Order selection for moving averages
Choosing the right $k$ is about balancing smoothness against responsiveness. There is no single correct answer; it depends on your data and your goal.
Factors to consider:
- Seasonality: If your data has a seasonal cycle, set $k$ equal to the seasonal period (e.g., $k = 12$ for monthly data with a yearly cycle). This averages out the seasonal effect.
- Noise level: Noisier data benefits from a larger $k$ to smooth out random variation.
- Trend stability: If the trend is steady, a larger $k$ works well. If the trend shifts frequently, a smaller $k$ keeps you closer to the current behavior.
- Responsiveness: Smaller orders react faster to recent changes but smooth less effectively.
In practice, try several values of $k$ and compare the results. Use out-of-sample testing (hold back some data, forecast it, and measure the error) to pick the order that generalizes best rather than the one that merely fits the historical data well.
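The out-of-sample procedure can be sketched as follows; the series, the holdout size, and the choice of a one-step-ahead SMA forecast with mean absolute error are all illustrative assumptions:

```python
def sma_forecast_mae(values, k, holdout):
    """Mean absolute error of one-step-ahead SMA forecasts over the last `holdout` points."""
    errors = []
    for t in range(len(values) - holdout, len(values)):
        forecast = sum(values[t - k:t]) / k  # average of the k observations before t
        errors.append(abs(values[t] - forecast))
    return sum(errors) / len(errors)

series = [10, 12, 11, 14, 13, 15, 14, 16, 15, 17]  # illustrative data
for k in (2, 3, 4):
    print(k, round(sma_forecast_mae(series, k, holdout=4), 3))
```

Picking the $k$ with the lowest held-out error guards against choosing an order that merely fits the history well.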