🥖Linear Modeling Theory Unit 11 Review

11.1 Two-Way ANOVA Model

Written by the Fiveable Content Team • Last updated August 2025
Two-way ANOVA: Concept and Purpose

Understanding the Basics

Two-way ANOVA examines how two categorical independent variables (factors) jointly affect a continuous dependent variable. Where one-way ANOVA tests the effect of a single factor, two-way ANOVA handles two factors at once and, critically, tests whether those factors interact with each other.

There are three effects you're testing in every two-way ANOVA:

  • Main effect of Factor A: the average effect of one factor on the dependent variable, collapsing across levels of the other factor. For example, the effect of soil type on plant growth, averaging over all fertilizer types.
  • Main effect of Factor B: the same idea for the second factor. For example, the effect of fertilizer type on plant growth, averaging over all soil types.
  • Interaction effect (A × B): whether the effect of one factor depends on the level of the other factor. For example, sandy soil might boost growth with organic fertilizer but hurt growth with synthetic fertilizer. That pattern, where the effect of one factor changes depending on the other, is an interaction.

Determining Significant Differences

The model tests whether the means of the dependent variable differ significantly across levels of each factor and across their combinations.

Consider comparing average test scores based on teaching method (lecture vs. discussion) and student background (science vs. humanities). Two-way ANOVA lets you ask all three questions at once: Does teaching method matter? Does background matter? Does the advantage of one teaching method change depending on background?

One-way ANOVA could only answer one of those questions at a time, say, whether teaching method alone affects scores. Two-way ANOVA handles the full picture in a single analysis.
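As a concrete sketch of this "full picture in a single analysis," the snippet below computes all three F-statistics for a balanced two-way design using nothing but NumPy. The data are made up for illustration (a 2×2 teaching method × background design with 3 scores per cell); the formulas are the standard balanced-design sums of squares.

```python
import numpy as np

def two_way_anova(y):
    """Balanced two-way ANOVA from an array of shape (a, b, n):
    levels of Factor A x levels of Factor B x replicates per cell.
    Returns the F-statistic for each of the three effects."""
    a, b, n = y.shape
    grand = y.mean()
    mean_A = y.mean(axis=(1, 2))            # Factor A level means
    mean_B = y.mean(axis=(0, 2))            # Factor B level means
    mean_cell = y.mean(axis=2)              # cell means

    # Sums of squares for the two main effects, the interaction, and error
    ss_A = b * n * ((mean_A - grand) ** 2).sum()
    ss_B = a * n * ((mean_B - grand) ** 2).sum()
    ss_AB = n * ((mean_cell - mean_A[:, None] - mean_B[None, :] + grand) ** 2).sum()
    ss_E = ((y - mean_cell[:, :, None]) ** 2).sum()

    df_A, df_B, df_AB, df_E = a - 1, b - 1, (a - 1) * (b - 1), a * b * (n - 1)
    ms_E = ss_E / df_E                      # mean square error
    return {
        "A":  (ss_A / df_A) / ms_E,
        "B":  (ss_B / df_B) / ms_E,
        "AB": (ss_AB / df_AB) / ms_E,
    }

# Hypothetical scores: 2 teaching methods x 2 backgrounds, 3 students per cell
scores = np.array([
    [[78, 82, 80], [70, 68, 72]],   # lecture: science, humanities
    [[74, 76, 75], [77, 79, 78]],   # discussion: science, humanities
], dtype=float)
print(two_way_anova(scores))   # F_A = 2.7, F_B = 14.7, F_AB = 50.7 here
```

Each F-value would then be compared against an F-distribution with the corresponding degrees of freedom to obtain a p-value; the large interaction F here reflects the crossover pattern built into the toy data.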

Two-way ANOVA: Concept and Purpose, R Tutorial Series: Two-Way Omnibus ANOVA

Two-way ANOVA: Mathematical Model

Model Components

The two-way ANOVA model decomposes each observation into additive components:

Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \epsilon_{ijk}

Each term represents a distinct source of variation:

  • Y_{ijk}: the observed value for the k-th observation in level i of Factor A and level j of Factor B (e.g., the test score of the k-th student in the i-th teaching method and j-th background group)
  • \mu: the grand mean of the dependent variable across all observations
  • \alpha_i: the main effect of Factor A at level i, representing how much level i of Factor A deviates from the grand mean
  • \beta_j: the main effect of Factor B at level j, representing how much level j of Factor B deviates from the grand mean
  • (\alpha\beta)_{ij}: the interaction effect for the specific combination of level i of A and level j of B. This captures any deviation in the cell mean that isn't explained by the two main effects alone.
  • \epsilon_{ijk}: the random error term, assumed \epsilon_{ijk} \sim N(0, \sigma^2)

The model is subject to the constraints \sum_i \alpha_i = 0, \sum_j \beta_j = 0, \sum_i (\alpha\beta)_{ij} = 0 for all j, and \sum_j (\alpha\beta)_{ij} = 0 for all i. These sum-to-zero constraints ensure the parameters are identifiable and that the effects represent deviations from the grand mean.
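Under these constraints, all the parameters can be read off directly from the cell means. The snippet below demonstrates this for a hypothetical 2×3 table of cell means (the numbers are invented for illustration): the effects are defined as deviations from the grand mean, so the sum-to-zero constraints hold by construction.

```python
import numpy as np

# Cell means for a hypothetical balanced 2x3 design
# (rows: levels of Factor A, columns: levels of Factor B)
cell_means = np.array([[10., 12., 14.],
                       [16., 13., 10.]])

mu = cell_means.mean()                    # grand mean
alpha = cell_means.mean(axis=1) - mu      # Factor A effects (row deviations)
beta = cell_means.mean(axis=0) - mu       # Factor B effects (column deviations)
# Interaction: what remains in each cell after mu, alpha_i, beta_j are removed
interaction = cell_means - mu - alpha[:, None] - beta[None, :]

# The sum-to-zero constraints hold automatically
print(alpha.sum(), beta.sum())                           # both 0
print(interaction.sum(axis=0), interaction.sum(axis=1))  # all 0
```

Note that each interaction term sums to zero down every row and across every column, which is exactly what the constraints \sum_i (\alpha\beta)_{ij} = 0 and \sum_j (\alpha\beta)_{ij} = 0 require.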

Two-way ANOVA: Concept and Purpose, R Tutorial Series: Two-Way ANOVA with Pairwise Comparisons

Hypothesis Testing and Interpretation

Two-way ANOVA involves three separate null hypotheses, each tested with its own F-statistic:

  1. H_0: \alpha_i = 0 \text{ for all } i (no main effect of Factor A)
  2. H_0: \beta_j = 0 \text{ for all } j (no main effect of Factor B)
  3. H_0: (\alpha\beta)_{ij} = 0 \text{ for all } i, j (no interaction effect)

Each alternative hypothesis states that at least one of the respective effects is nonzero.

The F-test for each effect compares the variance explained by that term (its mean square) to the unexplained variance (mean square error). A large F-ratio suggests the effect explains more variability than you'd expect from random noise alone.

Interpretation depends on which effects are significant:

  • If a main effect is significant but the interaction is not, you can interpret that main effect straightforwardly. For instance, if teaching method is significant (lecture scores higher than discussion) and there's no interaction, lecture outperforms discussion regardless of student background.
  • If the interaction is significant, you need to be cautious interpreting main effects in isolation. The interaction tells you that the effect of one factor changes across levels of the other, so reporting a single "main effect" can be misleading. In that case, examine the cell means or simple effects to understand the pattern.
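A quick numeric sketch of why a significant interaction makes a lone main effect misleading, using made-up cell means for the teaching method × background example: the simple effect of teaching method points in opposite directions for the two backgrounds, and averaging them into a "main effect" hides the reversal.

```python
import numpy as np

# Hypothetical cell means
# (rows: lecture, discussion; columns: science, humanities)
cell_means = np.array([[80., 70.],
                       [75., 78.]])

# Simple effects: lecture minus discussion, within each background
simple_effects = cell_means[0] - cell_means[1]
print(simple_effects)   # +5 for science, -8 for humanities: a crossover

# The "main effect" of teaching method averages the two simple effects,
# masking the fact that the sign flips across backgrounds
main_effect = simple_effects.mean()
print(main_effect)      # -1.5, which describes neither group well
```

This is why, with a significant interaction, the recommended move is to report the simple effects (or plot the cell means) rather than the marginal main effect.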

Assumptions of Two-way ANOVA

Key Assumptions

Two-way ANOVA relies on the same core assumptions as other linear models, applied to the cell structure of the design:

  • Independence: Observations within and across all cells must be independent. One student's test score should not influence another's. This is primarily a design issue, not something you can fix statistically after the fact.
  • Normality: The residuals (Y_{ijk} - \hat{Y}_{ij}) should be approximately normally distributed within each cell. With balanced designs and reasonable sample sizes, the F-test is fairly robust to moderate departures from normality.
  • Homogeneity of variances (homoscedasticity): The error variance σ2\sigma^2 should be the same across all cells. If the variability of test scores in the lecture/science group is much larger than in the discussion/humanities group, this assumption is violated.
  • No influential outliers: Extreme values in any cell can distort the group means and inflate or mask effects.
  • Fixed effects: The levels of both factors are specifically chosen by the researcher, not randomly sampled from a larger population. If levels are randomly sampled, you'd need a random-effects or mixed-effects model instead.

Assessing Assumption Validity

Each assumption can be checked with specific tools:

  1. Independence is ensured through proper experimental design, particularly random assignment of subjects to conditions. No statistical test can fully verify it after data collection.
  2. Normality can be assessed by examining residuals with Q-Q plots, histograms, or formal tests like the Shapiro-Wilk test. Check residuals within each cell if sample sizes permit, or check the overall residual distribution.
  3. Homogeneity of variances can be tested with Levene's test or by visually inspecting a residual-vs.-fitted-values plot for a consistent spread across groups.
  4. Outliers can be spotted using boxplots of residuals by cell, standardized residuals (values beyond \pm 3 are suspect), or Cook's distance for influential points.
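Two of these checks (variance homogeneity and standardized residuals) can be sketched directly from the raw data without any specialized package. The example below uses invented scores for a balanced 2×2×3 design; a common rule of thumb, hedged rather than exact, is that a largest-to-smallest cell variance ratio well above 4 signals trouble.

```python
import numpy as np

# Hypothetical balanced data, shape (a, b, n): 2 x 2 cells, 3 observations each
y = np.array([
    [[78., 82., 80.], [70., 68., 72.]],
    [[74., 76., 75.], [77., 79., 78.]],
])
a, b, n = y.shape
cell_means = y.mean(axis=2, keepdims=True)
resid = y - cell_means                      # residuals around each cell mean

# Homogeneity check: ratio of largest to smallest cell variance
cell_var = y.var(axis=2, ddof=1)
print(cell_var.max() / cell_var.min())      # rule of thumb: worry if >> 4

# Outlier check: standardize residuals by the pooled error variance (MSE)
mse = (resid ** 2).sum() / (a * b * (n - 1))
z = resid / np.sqrt(mse)
print(np.abs(z).max())                      # values beyond 3 are suspect
```

For the formal versions of these checks, `scipy.stats.levene` and `scipy.stats.shapiro` implement Levene's and Shapiro-Wilk tests, respectively.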

Violations of these assumptions can lead to inflated or deflated Type I error rates, meaning your p-values may not be trustworthy. Mild violations, especially of normality, are often tolerable with balanced designs. Serious heteroscedasticity or non-independence is more problematic and may require data transformations, robust methods, or a different modeling approach.