

Statistical Inference Methods


Why This Matters

Statistical inference is the heart of AP Statistics—it's where you move from describing data to making claims about entire populations based on samples. Every confidence interval you construct and every hypothesis test you run connects back to the fundamental question: How confident can we be that our sample tells us something true about the world? You'll be tested on your ability to choose the right inference procedure, verify conditions, interpret results in context, and understand the probabilistic reasoning behind your conclusions.

The methods in this guide aren't isolated techniques to memorize separately. They form an interconnected framework built on sampling distributions, standard error, and probability. Whether you're estimating a proportion, comparing two means, or testing for independence in a two-way table, you're applying the same core logic: quantify uncertainty, check conditions, and draw conclusions. Don't just memorize formulas—know what concept each method illustrates and when to apply it.


Estimation: Confidence Intervals

Confidence intervals answer the question "What's a reasonable range for the true parameter?" They combine a point estimate with a margin of error to capture uncertainty. The key insight: we're not saying the parameter is definitely in the interval—we're saying our method produces intervals that capture the true parameter a certain percentage of the time.

Confidence Intervals for Proportions

  • One-sample z-interval uses $\hat{p} \pm z^* \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$—the standard error shrinks as sample size increases (see the sketch after this list)
  • Success-failure condition requires $n\hat{p} \geq 10$ and $n(1-\hat{p}) \geq 10$ to ensure the sampling distribution is approximately normal
  • Interpretation must reference the method: "We are 95% confident that the true population proportion is between..."; never say the parameter "probably" falls in the interval
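
To make the arithmetic concrete, here is a minimal Python sketch with made-up numbers (scipy is used only to look up z*; on the exam you would use a table or calculator):

```python
import math
from scipy import stats

# Hypothetical sample: 412 successes out of 1,000 respondents
successes, n = 412, 1000
p_hat = successes / n

# Success-failure condition must hold before trusting the normal approximation
assert n * p_hat >= 10 and n * (1 - p_hat) >= 10

z_star = stats.norm.ppf(0.975)             # z* for 95% confidence (about 1.96)
se = math.sqrt(p_hat * (1 - p_hat) / n)    # standard error of p-hat
margin = z_star * se                       # margin of error

print(f"95% CI for p: ({p_hat - margin:.3f}, {p_hat + margin:.3f})")
```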

Confidence Intervals for Means

  • T-intervals replace z* with t* because we estimate the population standard deviation with s—this adds uncertainty reflected in wider intervals
  • Degrees of freedom ($df = n - 1$ for one sample) determine which t-distribution to use; smaller df means heavier tails and wider intervals
  • Robustness to non-normality increases with sample size due to the Central Limit Theorem, but always check for strong skewness or outliers with small samples
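
A similar sketch for a mean, again with hypothetical data; it shows why using t* with s makes the interval wider than a z-interval would be:

```python
import numpy as np
from scipy import stats

# Hypothetical small sample of measurements (n = 12)
data = np.array([4.8, 5.1, 5.3, 4.9, 5.0, 5.4, 4.7, 5.2, 5.1, 4.9, 5.3, 5.0])
n = len(data)
x_bar, s = data.mean(), data.std(ddof=1)   # sample mean and sample SD

df = n - 1
t_star = stats.t.ppf(0.975, df)            # t* for 95% confidence with 11 df (about 2.20 > 1.96)

margin = t_star * s / np.sqrt(n)
print(f"95% t-interval for mu: ({x_bar - margin:.3f}, {x_bar + margin:.3f})")
```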

Confidence Intervals for Differences

  • Two-proportion z-interval uses $SE = \sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$—note you add variances, not standard errors (see the sketch after this list)
  • If zero is in the interval, you cannot conclude a significant difference exists between the populations
  • Direction matters: interpret which group is larger based on how you defined the difference ($\hat{p}_1 - \hat{p}_2$)
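
A quick sketch with hypothetical counts; notice that the variances (not the standard errors) are added inside the square root:

```python
import math
from scipy import stats

# Hypothetical groups: 120/300 successes in group 1, 90/280 in group 2
x1, n1 = 120, 300
x2, n2 = 90, 280
p1, p2 = x1 / n1, x2 / n2

se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # add variances, then square-root
z_star = stats.norm.ppf(0.975)                           # 1.96 for 95% confidence

diff = p1 - p2                                           # defined as p1 - p2, so sign matters
lo, hi = diff - z_star * se, diff + z_star * se
print(f"95% CI for p1 - p2: ({lo:.3f}, {hi:.3f})")
# If the interval contained 0, we could not conclude the proportions differ
```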

Compare: Confidence intervals for proportions vs. means—both use point estimate ± margin of error, but proportions use z* (known sampling distribution shape) while means use t* (estimated variability). On FRQs, always specify which procedure you're using and why.


Decision-Making: Hypothesis Testing Framework

Hypothesis testing formalizes the question "Is this result surprising enough to reject chance?" You assume the null hypothesis is true, calculate how unlikely your observed data would be, and make a decision. The logic is indirect: you're not proving the alternative—you're assessing whether the null is plausible.

Null and Alternative Hypotheses

  • Null hypothesis ($H_0$) represents "no effect" or "no difference"—it's the claim you're testing against, always stated with equality ($=$, $\leq$, or $\geq$)
  • Alternative hypothesis ($H_a$) is what you're trying to find evidence for—can be one-sided ($<$ or $>$) or two-sided ($\neq$)
  • You never "accept" $H_0$—you either reject it or fail to reject it; absence of evidence isn't evidence of absence

P-Values

  • Definition: the probability of observing results as extreme or more extreme than the sample data, assuming $H_0$ is true (see the sketch after this list)
  • Small p-values (typically $< 0.05$) indicate the observed data would be unusual under $H_0$, providing evidence against it
  • P-value is NOT the probability that $H_0$ is true—this is a common misconception that will cost you points
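
To see what a p-value is (and is not), here is a minimal one-proportion z-test sketch with purely hypothetical counts:

```python
import math
from scipy import stats

# H0: p = 0.50 vs. Ha: p > 0.50; hypothetical sample of 560 successes in 1,000 trials
p0, successes, n = 0.50, 560, 1000
p_hat = successes / n

se = math.sqrt(p0 * (1 - p0) / n)   # standard error uses p0 because we assume H0 is true
z = (p_hat - p0) / se

p_value = stats.norm.sf(z)          # upper-tail probability of a result at least this extreme
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
# A small p-value means "data this extreme would be rare if H0 were true";
# it is NOT the probability that H0 is true
```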

Significance Level ($\alpha$)

  • Pre-set threshold (usually 0.05 or 0.01) determines when you reject $H_0$—if the p-value $\leq \alpha$, reject
  • Choosing $\alpha$ involves weighing consequences: lower $\alpha$ reduces false positives but increases false negatives
  • Statistical significance ≠ practical significance—a tiny, meaningless difference can be "significant" with large enough samples

Compare: P-value vs. significance level—the p-value is calculated from your data, while $\alpha$ is chosen before collecting data. Think of $\alpha$ as your threshold and the p-value as your evidence. FRQs often ask you to explain what a p-value means in context—never say it's the probability the null is true.


Comparing Groups: Tests for Means

When comparing numerical outcomes across groups, you're testing whether observed differences reflect real population differences or just sampling variability. The test statistic measures how many standard errors your observed difference is from the null hypothesis value.

One-Sample T-Test

  • Tests whether a population mean equals a hypothesized value using $t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}$ (see the sketch after this list)
  • Conditions: random sample, independence (10% condition), and normality (check with graphs for small samples)
  • Degrees of freedom $= n - 1$; use the t-distribution to find the p-value
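
A minimal sketch with hypothetical data, showing that scipy's ttest_1samp matches the formula above:

```python
import numpy as np
from scipy import stats

# Hypothetical measurements; does the population mean differ from mu_0 = 50?
data = np.array([52.1, 49.8, 51.5, 50.9, 53.2, 48.7, 51.1, 52.4, 50.3, 51.8])
mu_0 = 50

t_stat, p_value = stats.ttest_1samp(data, mu_0)   # two-sided test by default
df = len(data) - 1
print(f"t = {t_stat:.2f}, df = {df}, p-value = {p_value:.4f}")

# Same statistic from the formula t = (x-bar - mu_0) / (s / sqrt(n))
t_manual = (data.mean() - mu_0) / (data.std(ddof=1) / np.sqrt(len(data)))
print(f"manual t = {t_manual:.2f}")
```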

Two-Sample T-Test

  • Compares means from two independent groups—the standard error combines variability from both samples
  • Don't pool variances unless specifically told to assume equal population variances (AP Stats typically uses unpooled)
  • Degrees of freedom calculation is complex; use calculator output or the conservative $df = \min(n_1 - 1,\ n_2 - 1)$ (see the sketch below)
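
A short sketch with hypothetical groups; passing equal_var=False gives the unpooled (Welch) test that matches the AP convention:

```python
import numpy as np
from scipy import stats

# Hypothetical scores from two independent groups (e.g., two teaching methods)
group1 = np.array([78, 85, 82, 90, 74, 88, 81, 79, 86, 83])
group2 = np.array([72, 80, 75, 84, 70, 78, 77, 74])

# equal_var=False avoids pooling and uses the messy df formula a calculator reports
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=False)
print(f"t = {t_stat:.2f}, p-value = {p_value:.4f}")

# Conservative by-hand alternative for degrees of freedom
df_conservative = min(len(group1) - 1, len(group2) - 1)
print(f"conservative df = {df_conservative}")
```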

Paired T-Test

  • Used when observations are naturally paired (before/after, matched subjects)—analyze the differences as a single sample
  • Reduces variability by controlling for subject-to-subject differences; often more powerful than two-sample test
  • Conditions apply to the differences, not the original measurements—check that differences are approximately normal
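
A sketch with hypothetical before/after data, showing that a paired t-test is just a one-sample t-test on the differences:

```python
import numpy as np
from scipy import stats

# Hypothetical before/after scores for the same 8 subjects
before = np.array([140, 152, 138, 160, 145, 150, 148, 155])
after = np.array([135, 148, 137, 155, 142, 149, 143, 151])

diffs = before - after                              # conditions are checked on these differences
t_stat, p_value = stats.ttest_rel(before, after)    # paired test
t_check, p_check = stats.ttest_1samp(diffs, 0)      # identical result
print(f"t = {t_stat:.2f}, p-value = {p_value:.4f}")
```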

Compare: Two-sample vs. paired t-test—both compare two groups, but paired tests use the same subjects measured twice (or matched pairs), while two-sample tests use independent groups. Choosing the wrong test is a common FRQ error; always identify whether data are paired or independent.


Categorical Analysis: Chi-Square Tests

Chi-square tests assess whether observed categorical data match expected patterns. The test statistic $\chi^2 = \sum \frac{(O-E)^2}{E}$ measures total squared deviation from expected counts, standardized by expected counts.

Chi-Square Goodness-of-Fit

  • Tests whether a single categorical variable follows a hypothesized distribution—comparing observed counts to expected counts
  • Expected counts come from the hypothesized proportions multiplied by sample size
  • Degrees of freedom = number of categories − 1
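
A goodness-of-fit sketch with hypothetical die-roll counts, confirming the chi-square formula by hand:

```python
import numpy as np
from scipy import stats

# Hypothetical fairness check: 120 rolls of a die, H0: each face has probability 1/6
observed = np.array([25, 17, 15, 23, 24, 16])
expected = np.full(6, 120 / 6)                 # hypothesized proportions times sample size

chi2, p_value = stats.chisquare(observed, f_exp=expected)
df = len(observed) - 1                         # categories minus 1
print(f"chi-square = {chi2:.2f}, df = {df}, p-value = {p_value:.4f}")

# Same statistic from the formula: sum of (O - E)^2 / E
print(((observed - expected) ** 2 / expected).sum())
```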

Chi-Square Test for Independence

  • Tests whether two categorical variables are associated in a single population sampled randomly
  • Expected counts calculated as $\frac{\text{row total} \times \text{column total}}{\text{grand total}}$ for each cell (see the sketch after this list)
  • Null hypothesis: the variables are independent (no association); alternative: variables are associated
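
A sketch with a hypothetical two-way table; scipy.stats.chi2_contingency builds the expected counts from the row and column totals for you:

```python
import numpy as np
from scipy import stats

# Hypothetical two-way table (rows = snack preference, columns = grade level)
table = np.array([[30, 20, 25],
                  [15, 25, 35]])

chi2, p_value, df, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {df}, p-value = {p_value:.4f}")
print("expected counts:\n", expected.round(2))

# The identical call handles a homogeneity test too; only the sampling design
# and the wording of the hypotheses change
```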

Chi-Square Test for Homogeneity

  • Tests whether the distribution of a categorical variable is the same across different populations
  • Data collection differs from independence: samples are taken separately from each population
  • Same calculation as independence test, but different context and hypotheses

Compare: Independence vs. homogeneity—both use identical calculations and the same $\chi^2$ formula, but independence tests one sample for association between variables, while homogeneity tests multiple populations for identical distributions. The FRQ will signal which one by describing how data were collected.


Relationships: Regression Inference

Regression inference extends correlation and line-fitting to make claims about population relationships. You're testing whether the true slope $\beta$ differs from zero—if it does, there's a linear relationship between the variables in the population.

T-Test for Slope

  • Tests $H_0: \beta = 0$ (no linear relationship) using $t = \frac{b - 0}{SE_b}$, where $b$ is the sample slope (see the sketch after this list)
  • Conditions: linear relationship (check residual plot), independent observations, normal residuals, equal variance (constant spread in residual plot)
  • Degrees of freedom $= n - 2$ for simple linear regression
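
A sketch of the slope test with hypothetical (x, y) data; scipy.stats.linregress reports the sample slope, its standard error, and the two-sided p-value for the null hypothesis that the slope is zero:

```python
import numpy as np
from scipy import stats

# Hypothetical data: hours studied (x) vs. exam score (y)
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([55, 60, 58, 67, 70, 73, 75, 80])

result = stats.linregress(x, y)
df = len(x) - 2                               # df for simple linear regression
t_stat = result.slope / result.stderr         # t = (b - 0) / SE_b
print(f"b = {result.slope:.2f}, SE_b = {result.stderr:.3f}, "
      f"t = {t_stat:.2f}, df = {df}, p = {result.pvalue:.4f}")
```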

Confidence Interval for Slope

  • Estimates the true population slope with $b \pm t^* \cdot SE_b$ (see the sketch after this list)
  • If the interval contains zero, you cannot conclude a significant linear relationship exists
  • Interpretation: "We are 95% confident that for each one-unit increase in x, y changes by between [lower] and [upper] units on average"
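
Continuing the same hypothetical data from the slope-test sketch, the interval is just $b \pm t^* \cdot SE_b$ with $n - 2$ degrees of freedom:

```python
import numpy as np
from scipy import stats

# Same hypothetical hours-studied data as in the slope-test sketch
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([55, 60, 58, 67, 70, 73, 75, 80])

result = stats.linregress(x, y)
df = len(x) - 2
t_star = stats.t.ppf(0.975, df)               # t* for 95% confidence with n - 2 df

margin = t_star * result.stderr               # t* times the standard error of the slope
lo, hi = result.slope - margin, result.slope + margin
print(f"95% CI for the true slope: ({lo:.2f}, {hi:.2f})")
# An interval that excludes 0 is evidence of a linear relationship
```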

Correlation Coefficient

  • Pearson's r measures strength and direction of linear association; $r^2$ gives the proportion of variance explained
  • r is unitless and ranges from −1 to +1; outliers can dramatically affect its value
  • Correlation ≠ causation—even strong correlations don't prove one variable causes changes in another

Compare: Correlation (r) vs. slope (b)—both measure linear relationships, but r is standardized (unitless, between −1 and +1) while b has units and tells you the actual rate of change. You can have a strong correlation with a small slope or vice versa. FRQs may ask you to interpret both.


Understanding Errors and Power

Every hypothesis test risks making mistakes. Understanding error types and power helps you interpret results appropriately and design better studies. The key trade-off: reducing one type of error typically increases the other, unless you increase sample size.

Type I and Type II Errors

  • Type I error ($\alpha$): rejecting $H_0$ when it's actually true—a "false positive" or false alarm
  • Type II error ($\beta$): failing to reject $H_0$ when it's actually false—a "false negative" or missed detection
  • Consequences depend on context: in medical testing, Type I might mean unnecessary treatment; Type II might mean missing a disease

Power of a Test

  • Power $= 1 - \beta =$ probability of correctly rejecting a false null hypothesis
  • Increases with: larger sample size, larger effect size, higher $\alpha$, lower variability (see the simulation sketch after this list)
  • Power analysis helps determine needed sample size before collecting data—aim for power ≥ 0.80 typically
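
Power is easiest to see by simulation: repeatedly sample from a population where the null is actually false and count how often the test rejects. Here is a minimal sketch under assumed conditions (normal population, hypothetical effect size of 0.5 standard deviations); the helper estimated_power exists only for this illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def estimated_power(n, true_mean=0.5, sigma=1.0, alpha=0.05, reps=5000):
    """Simulated power of a one-sample t-test of H0: mu = 0 when mu really equals true_mean."""
    rejections = 0
    for _ in range(reps):
        sample = rng.normal(true_mean, sigma, size=n)
        _, p = stats.ttest_1samp(sample, 0)
        if p <= alpha:
            rejections += 1
    return rejections / reps

for n in (10, 20, 40):                         # power grows with sample size
    print(f"n = {n:2d}: estimated power ~ {estimated_power(n):.2f}")
```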

Trade-offs in Test Design

  • Lowering $\alpha$ (being stricter) reduces Type I error but increases Type II error and decreases power
  • Increasing sample size is the only way to reduce both error types simultaneously
  • One-tailed tests have more power than two-tailed tests for detecting effects in the specified direction

Compare: Type I vs. Type II errors—Type I is rejecting a true null (false positive, probability $= \alpha$), Type II is failing to reject a false null (false negative, probability $= \beta$). A classic FRQ setup: describe the consequences of each error type in a given context, then explain which is more serious and how you'd adjust $\alpha$ accordingly.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Estimating parameters | Confidence intervals for proportions, means, differences, slopes |
| Comparing proportions | One-proportion z-test, two-proportion z-test, chi-square tests |
| Comparing means | One-sample t-test, two-sample t-test, paired t-test |
| Categorical relationships | Chi-square independence, chi-square homogeneity, chi-square goodness-of-fit |
| Quantitative relationships | T-test for slope, confidence interval for slope, correlation |
| Decision errors | Type I error, Type II error, power |
| Conditions for inference | Random sampling, independence (10% condition), normality/large counts |
| Key formulas | Standard error, test statistic, margin of error, degrees of freedom |

Self-Check Questions

  1. What conditions must you verify before constructing a confidence interval for a population proportion, and why does each condition matter?

  2. Compare and contrast chi-square tests for independence and homogeneity: How do they differ in data collection, hypotheses, and interpretation, despite using identical calculations?

  3. A researcher obtains a p-value of 0.03. Explain what this means in the context of hypothesis testing, and identify one common misinterpretation students should avoid.

  4. Which factors increase the power of a hypothesis test? If you wanted to reduce both Type I and Type II error rates simultaneously, what would you need to change?

  5. When would you use a paired t-test instead of a two-sample t-test? Describe a scenario where choosing the wrong test would lead to incorrect conclusions, and explain why.