The Central Limit Theorem for Sums
The Central Limit Theorem (CLT) for sums extends the same core idea you learned for sample means: when you add up a large number of independent random variables, their sum follows an approximately normal distribution, no matter what the original population looks like. This matters because it lets you use normal distribution techniques (z-scores, probability tables) to answer questions about sums, even when the underlying data isn't normal.

The CLT for sums says that if you take a large enough sample of independent, identically distributed (i.i.d.) random variables and add them up, the distribution of that sum will be approximately normal. The key requirement is that the sample size n is sufficiently large. The more skewed or unusual the population distribution, the larger n needs to be for the approximation to work well.
Compare this to the CLT for means: both theorems rely on the same convergence principle, but here you're looking at the total rather than the average.
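One way to see the theorem at work is to simulate many sums drawn from a strongly skewed population and check that they cluster the way a normal distribution would. A minimal sketch using only the standard library, assuming a hypothetical exponential population with mean 2 (so σ = 2 as well) and a sample size of n = 50:

```python
import random
import statistics

random.seed(42)

n = 50           # sample size: number of values added per sum
trials = 10_000  # number of simulated sums

# Hypothetical population: exponential with mean 2 (strongly right-skewed).
pop_mean, pop_sd = 2.0, 2.0

# Draw many sums of n i.i.d. exponential values.
sums = [sum(random.expovariate(1 / pop_mean) for _ in range(n))
        for _ in range(trials)]

# Despite the skewed population, the sums center near n*mu
# with spread near sqrt(n)*sigma, as the CLT for sums predicts.
print(statistics.mean(sums))   # close to 50 * 2 = 100
print(statistics.stdev(sums))  # close to sqrt(50) * 2 ≈ 14.14
```

A histogram of `sums` would look roughly bell-shaped even though individual exponential draws are anything but.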
Parameters of the sampling distribution for sums:
- Mean: μ_ΣX = n·μ
- Standard deviation: σ_ΣX = √n·σ
where n is the sample size, μ is the population mean, and σ is the population standard deviation.
Notice that the mean of the sum scales linearly with n, but the standard deviation scales with √n. This is why variability grows more slowly than the total itself, and it's the reason sums become more predictable (relative to their size) as n increases.
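The two growth rates are easy to tabulate side by side. A short sketch, using hypothetical population parameters μ = 5 and σ = 3:

```python
import math

mu, sigma = 5.0, 3.0  # hypothetical population parameters

for n in (25, 100, 400):
    mean_sum = n * mu              # grows linearly with n
    sd_sum = math.sqrt(n) * sigma  # grows only with sqrt(n)
    # Relative spread sd/mean shrinks like 1/sqrt(n).
    print(n, mean_sum, sd_sum, round(sd_sum / mean_sum, 4))
```

Each time n quadruples, the spread only doubles while the total quadruples, so the ratio of spread to total halves.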
Mean and Standard Deviation of the Sampling Distribution
To find the center and spread of the sampling distribution for sums, plug n, μ, and σ into the two formulas above.
Example: Suppose a population has μ = 10 and σ = 2, and you draw a sample of n = 25.
- Calculate the mean of the sum: μ_ΣX = n·μ = 25 × 10 = 250
- Calculate the standard deviation of the sum: σ_ΣX = √n·σ = √25 × 2 = 10
So the sampling distribution for the sum of 25 observations is approximately normal with a center at 250 and a standard deviation of 10.
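The example's arithmetic can be sketched directly in Python:

```python
import math

n, mu, sigma = 25, 10, 2  # values from the example above

mean_sum = n * mu              # 25 * 10 = 250
sd_sum = math.sqrt(n) * sigma  # sqrt(25) * 2 = 5 * 2 = 10

print(mean_sum, sd_sum)  # → 250 10.0
```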

Z-Scores for Sums
Once you know the sampling distribution is approximately normal, you can standardize any observed sum using:
z = (Σx − n·μ) / (√n·σ)
where Σx is the observed sum, n·μ is the mean of the sampling distribution for sums, and √n·σ is its standard deviation. The z-score tells you how many standard deviations a particular sum falls from the expected total.
Step-by-step process for finding a probability:
- Identify n, μ, and σ from the problem.
- Compute n·μ and √n·σ.
- Plug the observed (or target) sum into the z-score formula.
- Use the standard normal table (or calculator) to find the corresponding probability.
Example: Using the parameters above (μ_ΣX = 250, σ_ΣX = 10), what's the probability the sum exceeds 265?
- z = (265 − 250) / 10 = 1.5
- From the z-table, P(Z ≤ 1.5) ≈ 0.9332
- P(ΣX > 265) = 1 − 0.9332 = 0.0668
So P(ΣX > 265) ≈ 0.0668, or about 6.7%.
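The four steps can be sketched end to end in Python; here the normal CDF comes from `math.erf` rather than a printed z-table:

```python
import math

def normal_cdf(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Step 1: identify the parameters (values from the example above).
n, mu, sigma = 25, 10, 2

# Step 2: compute the mean and standard deviation of the sum.
mean_sum = n * mu              # 250
sd_sum = math.sqrt(n) * sigma  # 10

# Step 3: standardize the target sum.
x = 265
z = (x - mean_sum) / sd_sum    # (265 - 250) / 10 = 1.5

# Step 4: convert the z-score to a right-tail probability.
p_exceed = 1 - normal_cdf(z)

print(z)                   # → 1.5
print(round(p_exceed, 4))  # → 0.0668
```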
You can also work backward from a probability to find a percentile. For instance, to find the 90th percentile of the sum, look up z ≈ 1.28 (the z-score with an area of 0.90 to its left) and solve Σx = n·μ + z·√n·σ = 250 + 1.28 × 10 ≈ 262.8.
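The backward calculation can be sketched with the standard library's `statistics.NormalDist`, whose `inv_cdf` plays the role of the reverse z-table lookup:

```python
import math
from statistics import NormalDist

n, mu, sigma = 25, 10, 2  # values from the example above
mean_sum = n * mu              # 250
sd_sum = math.sqrt(n) * sigma  # 10

# Reverse lookup: the z-score with 90% of the area to its left.
z90 = NormalDist().inv_cdf(0.90)    # ≈ 1.2816

# Un-standardize to get the 90th percentile of the sum.
sum_90th = mean_sum + z90 * sd_sum  # ≈ 262.8

print(round(sum_90th, 1))  # → 262.8
```

Using the table value 1.28 instead of the exact 1.2816 changes the answer only in the second decimal place.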
Statistical Foundations and Applications
A few related ideas tie into this theorem:
- Law of Large Numbers: As n grows, the sample mean converges to the population mean. The CLT for sums builds on this by describing how the sum's distribution behaves, not just where it centers.
- Statistical inference: The CLT for sums is what justifies using normal-based methods (confidence intervals, hypothesis tests) for totals. Without it, you'd need to know the exact population distribution to calculate probabilities for sums.
- The CLT for sums and the CLT for means are directly connected. Since ΣX = n·X̄, any result about the sum can be translated into a result about the mean, and vice versa. They're two sides of the same theorem.
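That equivalence means the z-score of a sum and the z-score of the corresponding mean are algebraically identical, which can be checked numerically. A sketch with a hypothetical sample drawn from a normal population with μ = 10 and σ = 2:

```python
import random

random.seed(7)
n, mu, sigma = 25, 10, 2
sample = [random.gauss(mu, sigma) for _ in range(n)]  # hypothetical sample

total = sum(sample)
xbar = total / n  # the sum and the mean differ only by the factor n

# z-score of the sum, using mean n*mu and sd sqrt(n)*sigma:
z_sum = (total - n * mu) / (n ** 0.5 * sigma)

# z-score of the mean, using mean mu and sd sigma/sqrt(n):
z_mean = (xbar - mu) / (sigma / n ** 0.5)

# The two standardizations agree up to floating-point rounding.
print(abs(z_sum - z_mean) < 1e-9)  # → True
```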