Fiveable

🥖Linear Modeling Theory Unit 11 Review


11.2 Main Effects and Interaction

Written by the Fiveable Content Team • Last updated August 2025

Main Effects vs Interaction Effects

Defining Main Effects and Interaction Effects

A main effect is the overall effect of one factor on the dependent variable, averaging across the levels of the other factor. An interaction effect occurs when the effect of one factor on the dependent variable changes depending on the level of the other factor.

Consider a concrete example: suppose you're studying how a drug (Factor A) affects blood pressure (the outcome), with patient age group (Factor B: young vs. old) as the second factor. A main effect of the drug would mean that, on average across both age groups, the drug changes blood pressure. An interaction between drug and age would mean the drug's effect on blood pressure differs for young patients compared to old patients.

In a two-way ANOVA, you're testing three things simultaneously:

  • Main effect of Factor A (e.g., drug vs. placebo)
  • Main effect of Factor B (e.g., young vs. old)
  • A × B interaction effect (does the drug effect depend on age?)

Complexity of Interpretation with Interaction Effects

A significant interaction complicates how you read the main effects. If the drug lowers blood pressure by 20 mmHg in older patients but only 2 mmHg in younger patients, reporting the "average drug effect" across both groups is misleading. The main effect is technically real, but it doesn't tell the full story.

The rule of thumb: when a significant interaction is present, interpret the main effects in the context of that interaction, not in isolation. You'll often need to look at simple effects (the effect of one factor at each level of the other) rather than relying on the marginal means alone.
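As a minimal sketch of what "simple effects" means, you can read them off the cell means directly. The numbers below are the hypothetical drug-example figures from above; the column and level names are assumptions:

```python
# Simple effects from cell means: effect of drug at each level of age.
# Cell means are illustrative, matching the drug/blood-pressure example.
import pandas as pd

cells = pd.DataFrame({
    "drug": ["placebo", "placebo", "drug", "drug"],
    "age": ["young", "old", "young", "old"],
    "mean_bp_change": [0.0, 0.0, -2.0, -20.0],
})

# Pivot to one row per age group, one column per drug condition.
wide = cells.pivot(index="age", columns="drug", values="mean_bp_change")
simple_effects = wide["drug"] - wide["placebo"]
print(simple_effects)  # the drug effect differs sharply by age group
```

Contrast this with the marginal (main-effect) comparison, which would average the -2 and -20 into a single number and hide the difference.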

Related: R Tutorial Series: Two-Way ANOVA with Interactions and Simple Main Effects

Interpreting Main Effects and Interactions

Interpreting Main Effects

Interpreting a main effect means determining whether the levels of one factor produce significantly different outcomes on the dependent variable, averaged across the levels of the other factor.

For example, in a study on teaching method (new vs. traditional) and student motivation (high vs. low) with test scores as the outcome: a main effect of teaching method would mean that, collapsing across motivation levels, the new method produces different average test scores than the traditional method.


Interpreting Interaction Effects

An interaction effect means the impact of one factor depends on the level of the other. In the teaching example, an interaction would indicate that the advantage (or disadvantage) of the new teaching method is different for high-motivation students than for low-motivation students.

Interaction plots are the best tool for visualizing this. You plot the cell means with one factor on the x-axis and separate lines for each level of the other factor.

  • Crossing lines suggest a disordinal (crossover) interaction: the direction of one factor's effect actually reverses across levels of the other factor.
  • Non-parallel but non-crossing lines suggest an ordinal interaction: the effect goes in the same direction for both groups but is stronger in one than the other.
  • Parallel lines suggest no interaction: the effect of one factor is consistent across levels of the other.
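A basic interaction plot can be sketched with matplotlib. The mean test scores below are hypothetical numbers chosen to show an ordinal interaction in the teaching-method example:

```python
# Sketch of an interaction plot: one factor on the x-axis,
# one line per level of the other factor. Cell means are illustrative.
import matplotlib
matplotlib.use("Agg")  # render off-screen (no display needed)
import matplotlib.pyplot as plt

methods = ["traditional", "new"]
# Hypothetical mean test scores: non-parallel, non-crossing lines
# (an ordinal interaction -- the new method helps both groups,
# but helps high-motivation students more).
scores = {"high motivation": [75.0, 88.0], "low motivation": [70.0, 72.0]}

fig, ax = plt.subplots()
for motivation, means in scores.items():
    ax.plot(methods, means, marker="o", label=motivation)
ax.set_xlabel("Teaching method")
ax.set_ylabel("Mean test score")
ax.legend(title="Motivation")
fig.savefig("interaction_plot.png")
```

Swapping in crossing lines (e.g., the new method helping one group and hurting the other) would show a disordinal interaction instead.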

Always connect your interpretation back to the research question. A statistically significant interaction that amounts to a trivial difference in practice may not be meaningful, and vice versa.

Testing Main Effects and Interactions

Hypothesis Testing in Two-Way ANOVA

A two-way ANOVA tests three null hypotheses:

  1. Main effect of Factor A: $H_0$: all marginal means of Factor A are equal (no effect of A, averaging over B).
  2. Main effect of Factor B: $H_0$: all marginal means of Factor B are equal (no effect of B, averaging over A).
  3. Interaction (A × B): $H_0$: the effect of Factor A on the dependent variable does not change across levels of Factor B (and equivalently, the effect of B does not change across levels of A).

Each hypothesis is tested with its own F-ratio, which compares the variance the effect explains to the residual (error) variance.

Calculating and Interpreting F-Ratios and P-Values

The F-ratio for any effect in the model follows the same logic:

$$F = \frac{MS_{\text{effect}}}{MS_{\text{error}}}$$

where $MS_{\text{effect}}$ is the mean square for that effect (its sum of squares divided by its degrees of freedom) and $MS_{\text{error}}$ is the mean square for the residual. A larger F means the effect explains more variance relative to noise.

The steps for testing each effect:

  1. Partition the total sum of squares into components: $SS_A$, $SS_B$, $SS_{AB}$, and $SS_{\text{error}}$.
  2. Divide each sum of squares by its degrees of freedom to get the mean squares.
  3. Compute the F-ratio for each effect using the formula above.
  4. Compare each F to the F-distribution with the appropriate degrees of freedom to obtain a p-value.
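The four steps above can be carried out by hand for a balanced design. This is a numerical sketch with toy data (a 2×2 design with 3 replicates per cell, values invented for illustration); scipy supplies the F-distribution for step 4:

```python
# Manual sum-of-squares partition for a balanced two-way design.
# y[i, j, k] = replicate k in the cell at A level i, B level j.
import numpy as np
from scipy.stats import f as f_dist

y = np.array([
    [[4.0, 5.0, 6.0], [10.0, 11.0, 12.0]],   # A level 1
    [[5.0, 6.0, 7.0], [20.0, 21.0, 22.0]],   # A level 2
])
a, b, n = y.shape
grand = y.mean()
cell = y.mean(axis=2)          # cell means
A = y.mean(axis=(1, 2))        # marginal means of factor A
B = y.mean(axis=(0, 2))        # marginal means of factor B

# Step 1: partition the total sum of squares.
ss_a = b * n * np.sum((A - grand) ** 2)
ss_b = a * n * np.sum((B - grand) ** 2)
ss_ab = n * np.sum((cell - A[:, None] - B[None, :] + grand) ** 2)
ss_err = np.sum((y - cell[:, :, None]) ** 2)

# Step 2: mean squares = SS / df.
df_a, df_b = a - 1, b - 1
df_ab, df_err = df_a * df_b, a * b * (n - 1)

# Steps 3-4: F-ratio and upper-tail p-value for the interaction.
F_ab = (ss_ab / df_ab) / (ss_err / df_err)
p_ab = f_dist.sf(F_ab, df_ab, df_err)
print(F_ab, p_ab)
```

The same last two lines, with the corresponding SS and df, give the F and p for each main effect.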

A p-value below your significance threshold (typically 0.05) provides evidence to reject that null hypothesis.

Reporting example: "There was a significant main effect of teaching method, $F(1, 100) = 12.34$, $p < .001$, and a significant interaction between teaching method and student motivation, $F(1, 100) = 5.67$, $p = .019$."

Notice that each F-test gets its own degrees of freedom. The first number in $F(1, 100)$ is the effect's degrees of freedom; the second is the error degrees of freedom. Always report both, along with the F-value and p-value, for each of the three tests.