Confidence Intervals and the Student's t-Distribution
Confidence intervals using the t-distribution
When you're estimating a population mean, you often don't know the true population standard deviation (σ). In practice, you almost never do. Instead, you use the sample standard deviation (s) as an estimate, and that extra uncertainty means you need the t-distribution instead of the normal (z) distribution.
The t-distribution is used when:
- The population standard deviation is unknown (you're using s instead of σ)
- The underlying population is approximately normal, or the sample size is large enough for the Central Limit Theorem to apply
To build a confidence interval for a single population mean:
- Calculate the sample mean (x̄) and sample standard deviation (s)
- Determine your degrees of freedom: df = n − 1
- Look up the critical t-value (t*) for your desired confidence level and degrees of freedom (using a t-table or technology)
- Plug into the formula: x̄ ± t* · s/√n
Where:
- x̄ = sample mean
- t* = critical t-value
- s = sample standard deviation
- n = sample size
The s/√n piece is the standard error of the mean. It estimates how much your sample mean typically varies from the true population mean. Multiply it by t* and you get the margin of error, which tells you how wide the interval stretches on each side of x̄.
Example: Suppose you measure the heights of 20 students and compute the sample mean x̄ and sample standard deviation s, both in inches. For a 95% confidence interval with df = 19, the critical value is t* ≈ 2.093. The margin of error is 2.093 · s/√20 inches, and your 95% confidence interval is x̄ ± that margin of error.
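The calculation above can be sketched in a few lines of Python. The sample values (x̄ = 67.5 in, s = 3.2 in) are made up for illustration; the critical value 2.093 is the standard two-sided 95% value for df = 19 from a t-table:

```python
import math

def t_confidence_interval(mean, sd, n, t_crit):
    """Confidence interval for a population mean: mean ± t* · s/√n."""
    se = sd / math.sqrt(n)   # standard error of the mean
    margin = t_crit * se     # margin of error
    return mean - margin, mean + margin

# Hypothetical numbers for illustration: 20 students, x̄ = 67.5 in, s = 3.2 in.
# t* = 2.093 is the two-sided 95% critical value for df = 19.
lo, hi = t_confidence_interval(67.5, 3.2, 20, 2.093)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```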

Degrees of freedom in the t-distribution
Degrees of freedom (df) represent the number of independent pieces of information in your sample that are free to vary. For a single sample mean, df = n − 1. You lose one degree of freedom because the sample standard deviation is calculated using x̄, which "uses up" one piece of information.
Degrees of freedom control the shape of the t-distribution:
- Lower df (small samples): thicker tails, which means more probability in the extremes. This produces larger critical values and therefore wider confidence intervals. The extra width accounts for the added uncertainty of estimating σ with a small sample.
- Higher df (larger samples): the tails thin out and the distribution looks more and more like the standard normal curve.
For example, a t-distribution with df = 5 has noticeably thicker tails than one with df = 30. By the time you reach df = 100 or so, the t-distribution is nearly identical to the standard normal.
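You can see the thick-tails effect directly from the t density formula, which only needs the gamma function from Python's standard library. A minimal sketch comparing tail density at x = 3 (the specific df values here are just illustrative):

```python
import math

def t_pdf(x, df):
    """Density of the t-distribution with df degrees of freedom."""
    coef = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return coef * (1 + x * x / df) ** (-(df + 1) / 2)

def normal_pdf(x):
    """Density of the standard normal distribution."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Density far out in the tail (x = 3): smaller df puts more probability there,
# and the t density approaches the normal density as df grows.
for df in (5, 30, 100):
    print(f"df={df:3d}: t pdf at x=3 is {t_pdf(3, df):.5f}")
print(f"standard normal pdf at x=3 is {normal_pdf(3):.5f}")
```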

T-distribution vs. normal distribution
Similarities:
- Both are symmetric and bell-shaped
- Both are centered at 0 (when standardized)
- Both are used for inference about population means
Differences:
- The t-distribution has thicker tails, meaning extreme values are more likely. This reflects the extra uncertainty from estimating σ with s.
- The shape of the t-distribution changes depending on df, while the standard normal distribution has a single fixed shape.
- You use the t-distribution when σ is unknown. You use the z (normal) distribution when σ is known, which is rare in practice.
A common rule of thumb says to use the z-distribution when n ≥ 30, but that's a simplification. The real deciding factor is whether you know σ. If you're using s, the t-distribution is technically correct regardless of sample size. With large n, though, the t and z values are so close that it barely matters.
Hypothesis testing with the t-distribution
While this section of the course focuses on confidence intervals, it's worth seeing how the t-distribution connects to hypothesis testing. The same logic applies: you're using s instead of σ, so you use a t-statistic instead of a z-statistic.
Steps for a one-sample t-test:
- State the null hypothesis (H₀) and alternative hypothesis (Hₐ)
- Choose a significance level (α, commonly 0.05)
- Calculate the test statistic: t = (x̄ − μ₀) / (s/√n)
- Find the p-value or compare the t-statistic to the critical value from the t-distribution with df = n − 1
- If the p-value is less than α (or the test statistic falls in the rejection region), reject H₀
The t-statistic measures how many standard errors your sample mean is from the hypothesized value μ₀. A larger absolute value means stronger evidence against the null hypothesis.
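The t-statistic calculation can be sketched with the standard library's statistics module. The data here is invented for illustration; the critical value 2.365 is the standard two-sided 5% value for df = 7 from a t-table:

```python
import math
import statistics

def one_sample_t(data, mu0):
    """t-statistic for H0: μ = mu0, using the sample mean and standard deviation."""
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)  # sample standard deviation (n − 1 in the denominator)
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical measurements: do they differ from a claimed mean of 10?
data = [10.2, 9.8, 10.5, 10.9, 10.1, 10.4, 9.9, 10.6]
t_stat = one_sample_t(data, 10)

# Two-sided 5% critical value for df = 7 is 2.365 (from a t-table).
print(f"t = {t_stat:.3f}; reject H0 at α = 0.05: {abs(t_stat) > 2.365}")
```

With this made-up data the t-statistic falls just short of the critical value, so the null hypothesis is not rejected at the 5% level.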