Fiveable

🎲Intro to Statistics Unit 13 Review

13.4 Test of Two Variances

Written by the Fiveable Content Team • Last updated August 2025

Test of Two Variances

Comparison of Population Variances

The F-test for two variances answers a specific question: do two populations have the same spread, or is one significantly more variable than the other? This matters because many statistical methods (like ANOVA and two-sample t-tests) assume equal variances, so you need a way to check that assumption before proceeding.

You'd use this test when comparing variability between two groups, such as test score spread between two classrooms, or blood pressure variability between a treatment group and a control group.

Assumptions of the F-test:

  • Both samples are independent and randomly selected
  • Both populations are normally distributed

Violations of these assumptions can lead to inaccurate results, especially with small sample sizes (n < 30). The F-test is notably sensitive to non-normality, more so than many other tests. Outliers can also heavily influence the result because variance calculations involve squaring deviations, which amplifies extreme values.

If assumptions are violated, alternative tests like Levene's test or the Brown-Forsythe test are more appropriate because they're less sensitive to departures from normality.
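As a sketch of how these alternatives are typically run in practice, here is how SciPy's `levene` function handles both tests (the two samples below are invented for illustration; Brown-Forsythe is Levene's test computed around the median):

```python
# Sketch: Levene's and Brown-Forsythe tests with SciPy.
# The two samples below are made up for illustration.
from scipy import stats

group_a = [12.1, 14.3, 11.8, 15.0, 13.2, 12.7, 14.9, 13.5]
group_b = [10.2, 18.6, 9.4, 19.1, 11.0, 17.8, 10.5, 18.2]

# Levene's test in its classic form, centered on the mean
stat_levene, p_levene = stats.levene(group_a, group_b, center='mean')

# Brown-Forsythe test: the same statistic centered on the median,
# which makes it more robust to skewed distributions
stat_bf, p_bf = stats.levene(group_a, group_b, center='median')

print(f"Levene:         W = {stat_levene:.3f}, p = {p_levene:.4f}")
print(f"Brown-Forsythe: W = {stat_bf:.3f}, p = {p_bf:.4f}")
```

In both cases a small p-value suggests the group variances differ, without requiring the samples to be normally distributed.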

Calculation of the F-Ratio

The F-ratio (also called the variance ratio) is simply the ratio of the two sample variances:

F = \frac{s_1^2}{s_2^2}

By convention, you place the larger sample variance in the numerator and the smaller in the denominator. This ensures the F-ratio is always ≥ 1, which simplifies comparison with the critical value.

Steps to calculate and evaluate the F-ratio:

  1. Compute the sample variance for each group (s_1^2 and s_2^2).

  2. Place the larger variance in the numerator: F = \frac{s_{\text{larger}}^2}{s_{\text{smaller}}^2}.

  3. Find the degrees of freedom: df_1 = n_1 - 1 (numerator) and df_2 = n_2 - 1 (denominator), where n_1 is the sample size associated with the larger variance.

  4. Look up the critical F-value from an F-distribution table using your significance level (typically 0.05 or 0.01) and the two degrees of freedom.

  5. Compare: if your calculated F-ratio exceeds the critical F-value, the difference in variances is statistically significant.

Quick example: Suppose Group A (n = 16) has s_A^2 = 24 and Group B (n = 21) has s_B^2 = 8. The F-ratio is F = \frac{24}{8} = 3.0, with df_1 = 15 and df_2 = 20. You'd then check whether 3.0 exceeds the critical value for those degrees of freedom at your chosen significance level.
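The steps above can be sketched in Python, using SciPy's F distribution to look up the critical value (the sample sizes and variances come from the quick example; this is a sketch, not a full analysis pipeline):

```python
# Sketch of the F-test steps, using the numbers from the example above.
from scipy import stats

n_a, var_a = 16, 24.0   # Group A: sample size and sample variance
n_b, var_b = 21, 8.0    # Group B: sample size and sample variance

# Step 2: place the larger variance in the numerator, so F >= 1
if var_a >= var_b:
    f_ratio = var_a / var_b
    df1, df2 = n_a - 1, n_b - 1
else:
    f_ratio = var_b / var_a
    df1, df2 = n_b - 1, n_a - 1

# Step 4: critical value for a one-tailed test at alpha = 0.05
alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, df1, df2)

# Step 5: compare the calculated ratio against the critical value
print(f"F = {f_ratio:.2f}, critical F({df1}, {df2}) = {f_crit:.2f}")
print("Reject H0" if f_ratio > f_crit else "Fail to reject H0")
```

With these numbers, F = 3.0 exceeds the critical value (about 2.20 at the 5% level), so the test rejects the null hypothesis of equal variances.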

Interpretation of F-Test Results

  • If the calculated F-ratio is less than the critical F-value, you do not reject the null hypothesis. There's insufficient evidence to say the population variances differ.
  • If the calculated F-ratio is greater than the critical F-value, you reject the null hypothesis. The data suggest the population variances are unequal.

One-tailed vs. two-tailed tests:

The F-test can be run as either one-tailed or two-tailed, depending on your hypotheses.

  • A one-tailed test specifies a direction: H_a: \sigma_1^2 > \sigma_2^2 (or the reverse). You're testing whether one specific population is more variable than the other.
  • A two-tailed test uses H_a: \sigma_1^2 \neq \sigma_2^2, meaning you're just checking whether the variances differ in either direction. For a two-tailed test at significance level \alpha, you use \alpha/2 when looking up the critical value, since you're splitting the rejection region across both tails.

The choice of alternative hypothesis determines which critical value you use and how you interpret the result, so set up your hypotheses before calculating.
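As one way to see the one-tailed/two-tailed difference in code, here is a sketch using SciPy's survival function; doubling the one-tailed tail probability is the usual shortcut when the larger variance is already in the numerator (the F-ratio and degrees of freedom reuse the earlier example's numbers):

```python
# Sketch: one-tailed vs. two-tailed p-values for an F-ratio,
# assuming the larger sample variance is in the numerator.
from scipy import stats

f_ratio, df1, df2 = 3.0, 15, 20  # numbers from the earlier example

# One-tailed p-value: P(F >= f_ratio) under the null hypothesis
p_one_tailed = stats.f.sf(f_ratio, df1, df2)

# Two-tailed p-value: double the tail probability (capped at 1)
p_two_tailed = min(2 * p_one_tailed, 1.0)

print(f"one-tailed p = {p_one_tailed:.4f}")
print(f"two-tailed p = {p_two_tailed:.4f}")
```

The two-tailed p-value is twice the one-tailed value, which is why the same F-ratio can be significant in a directional test but not in a non-directional one.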

Variance Equality and Statistical Assumptions

Homoscedasticity means equal variances across groups. Many statistical tests assume this, including ANOVA and regression. When this assumption holds, standard errors and confidence intervals from those tests are reliable.

Heteroscedasticity means unequal variances across groups. When this occurs, standard errors can become biased, which makes hypothesis tests and confidence intervals unreliable. That's why checking for equal variances matters before running tests that assume it.

Robustness describes how well a statistical test performs when its assumptions are violated. Some tests tolerate mild violations (especially with large, equal-sized samples), while others break down quickly. The F-test for two variances is not very robust to non-normality, which is why alternatives exist.

The folded F-test is a common variant for the two-tailed case: you compute both sample variances, place the larger one in the numerator so the statistic is always ≥ 1, and double the resulting tail probability to get a two-tailed p-value. It's a convenient way to run the two-tailed comparison, but it still assumes normality, so when that assumption is in doubt or samples are small, Levene's or the Brown-Forsythe test remains the safer choice.
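A minimal sketch of the folded F statistic from raw data (the samples here are invented; `statistics.variance` from the standard library computes the sample variance with n − 1 in the denominator):

```python
# Sketch of the folded F-test: compute sample variances from raw data,
# put the larger one in the numerator, and double the tail probability.
# The data below are made up for illustration.
from statistics import variance
from scipy import stats

group_1 = [5.1, 4.8, 5.6, 4.9, 5.3, 5.0, 5.4, 4.7]
group_2 = [5.0, 6.2, 3.9, 6.5, 4.1, 5.9, 3.8, 6.4]

s2_1, s2_2 = variance(group_1), variance(group_2)

# "Fold": the larger variance goes on top, so the statistic is >= 1
if s2_1 >= s2_2:
    f_stat, df_num, df_den = s2_1 / s2_2, len(group_1) - 1, len(group_2) - 1
else:
    f_stat, df_num, df_den = s2_2 / s2_1, len(group_2) - 1, len(group_1) - 1

# Two-tailed p-value for the folded statistic (capped at 1)
p_value = min(2 * stats.f.sf(f_stat, df_num, df_den), 1.0)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

Because the statistic is folded to be at least 1, only the upper tail needs to be consulted, and doubling it accounts for both directions of the alternative hypothesis.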