Statistical inference is the heart of AP Statistics—it's where you move from describing data to making claims about entire populations based on samples. Every confidence interval you construct and every hypothesis test you run connects back to the fundamental question: How confident can we be that our sample tells us something true about the world? You'll be tested on your ability to choose the right inference procedure, verify conditions, interpret results in context, and understand the probabilistic reasoning behind your conclusions.
The methods in this guide aren't isolated techniques to memorize separately. They form an interconnected framework built on sampling distributions, standard error, and probability. Whether you're estimating a proportion, comparing two means, or testing for independence in a two-way table, you're applying the same core logic: quantify uncertainty, check conditions, and draw conclusions. Don't just memorize formulas—know what concept each method illustrates and when to apply it.
Estimation: Confidence Intervals
Confidence intervals answer the question "What's a reasonable range for the true parameter?" They combine a point estimate with a margin of error to capture uncertainty. The key insight: we're not saying the parameter is definitely in the interval—we're saying our method produces intervals that capture the true parameter a certain percentage of the time.
Confidence Intervals for Proportions
One-sample z-interval uses p̂ ± z*·√(p̂(1−p̂)/n)—the standard error shrinks as sample size increases
Success-failure condition requires np̂ ≥ 10 and n(1−p̂) ≥ 10 to ensure the sampling distribution is approximately normal
Interpretation must reference the method: "We are 95% confident that the true population proportion is between..."—never say the parameter "probably" falls in the interval
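The interval above can be sketched in a few lines of Python—the counts here are purely illustrative, and `NormalDist` from the standard library supplies z*:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical sample: 412 successes out of 800 (illustrative numbers)
n, successes = 800, 412
p_hat = successes / n

# Check the success-failure condition before trusting the interval
assert n * p_hat >= 10 and n * (1 - p_hat) >= 10

conf = 0.95
z_star = NormalDist().inv_cdf((1 + conf) / 2)   # z* ~ 1.96 for 95%
se = sqrt(p_hat * (1 - p_hat) / n)              # standard error of p-hat
margin = z_star * se
interval = (p_hat - margin, p_hat + margin)
print(interval)
```

The same skeleton works for any confidence level—only `conf` (and hence z*) changes.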
Confidence Intervals for Means
T-intervals replace z* with t* because we estimate the population standard deviation with s—this adds uncertainty reflected in wider intervals
Degrees of freedom (df=n−1 for one sample) determine which t-distribution to use; smaller df means heavier tails and wider intervals
Robustness to non-normality increases with sample size due to the Central Limit Theorem, but always check for strong skewness or outliers with small samples
Confidence Intervals for Differences
Two-proportion z-interval uses SE = √(p̂₁(1−p̂₁)/n₁ + p̂₂(1−p̂₂)/n₂)—note you add the variances, not the standard errors
If zero is in the interval, you cannot conclude a significant difference exists between the populations
Direction matters: interpret which group is larger based on how you defined the difference (p̂₁ − p̂₂)
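A minimal sketch of the two-proportion interval, with made-up counts, to emphasize that the variances—not the standard errors—are added under a single square root:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical data: group 1 has 120/300 successes, group 2 has 90/300
n1, x1 = 300, 120
n2, x2 = 300, 90
p1, p2 = x1 / n1, x2 / n2

# Add the variances of the two sample proportions, then take one square root
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

z_star = NormalDist().inv_cdf(0.975)   # 95% confidence
diff = p1 - p2                          # direction: group 1 minus group 2
interval = (diff - z_star * se, diff + z_star * se)
print(interval)  # if 0 falls inside, the difference is not significant at 5%
```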
Compare: Confidence intervals for proportions vs. means—both use point estimate ± margin of error, but proportions use z* (known sampling distribution shape) while means use t* (estimated variability). On FRQs, always specify which procedure you're using and why.
Decision-Making: Hypothesis Testing Framework
Hypothesis testing formalizes the question "Is this result surprising enough to reject chance?" You assume the null hypothesis is true, calculate how unlikely your observed data would be, and make a decision. The logic is indirect: you're not proving the alternative—you're assessing whether the null is plausible.
Null and Alternative Hypotheses
Null hypothesis (H0) represents "no effect" or "no difference"—it's the claim you test against, and its statement always includes equality (=, ≤, or ≥)
Alternative hypothesis (Ha) is what you're trying to find evidence for—can be one-sided (< or >) or two-sided (≠)
You never "accept" H0—you either reject it or fail to reject it; absence of evidence isn't evidence of absence
P-Values
Definition: the probability of observing results as extreme or more extreme than the sample data, assuming H0 is true
Small p-values (typically <0.05) indicate the observed data would be unusual under H0, providing evidence against it
P-value is NOT the probability that H0 is true—this is a common misconception that will cost you points
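A simulation makes the definition concrete. This sketch (numbers invented: 60 heads in 100 flips, testing H0 that the coin is fair) estimates a two-sided p-value by asking how often chance alone produces a result at least as extreme as the one observed:

```python
import random

random.seed(1)

# Under H0 (fair coin, p = 0.5), how often do 100 flips produce a head
# count at least as extreme as the observed 60? All numbers illustrative.
observed, n, trials = 60, 100, 20_000

def heads(n):
    # number of heads in n fair-coin flips
    return sum(random.random() < 0.5 for _ in range(n))

# Two-sided p-value: results at least as far from 50 as the observed 60
extreme = sum(abs(heads(n) - 50) >= abs(observed - 50) for _ in range(trials))
p_value = extreme / trials
print(p_value)  # small -> the observed data would be unusual under H0
```

Note what the simulation conditions on: the null is assumed true throughout, which is exactly why the p-value cannot be "the probability H0 is true."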
Significance Level (α)
Pre-set threshold (usually 0.05 or 0.01) determines when you reject H0—if p-value ≤ α, reject
Statistical significance ≠ practical significance—a tiny, meaningless difference can be "significant" with large enough samples
Compare: P-value vs. significance level—p-value is calculated from your data, while α is chosen before collecting data. Think of α as your threshold and p-value as your evidence. FRQs often ask you to explain what a p-value means in context—never say it's the probability the null is true.
Comparing Groups: Tests for Means
When comparing numerical outcomes across groups, you're testing whether observed differences reflect real population differences or just sampling variability. The test statistic measures how many standard errors your observed difference is from the null hypothesis value.
One-Sample T-Test
Tests whether a population mean equals a hypothesized value using t = (x̄ − μ₀)/(s/√n)
Conditions: random sample, independence (10% condition), and normality (check with graphs for small samples)
Degrees of freedom = n−1; use t-distribution to find p-value
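Computing the test statistic by hand follows directly from the formula; the data below are invented for illustration:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of 12 measurements; H0: mu = 50 (illustrative)
data = [52.1, 48.3, 51.7, 49.9, 53.2, 50.8, 47.5, 52.6, 51.1, 49.4, 50.2, 53.0]
mu0 = 50

n = len(data)
x_bar = mean(data)
s = stdev(data)                       # sample standard deviation (n - 1 divisor)
t = (x_bar - mu0) / (s / sqrt(n))     # test statistic
df = n - 1
print(t, df)   # compare t to a t-distribution with df = 11 for the p-value
```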
Two-Sample T-Test
Compares means from two independent groups—the standard error combines variability from both samples
Don't pool variances unless specifically told to assume equal population variances (AP Stats typically uses unpooled)
Degrees of freedom calculation is complex; use calculator output or the conservative df = min(n₁ − 1, n₂ − 1)
Paired T-Test
Used when observations are naturally paired (before/after, matched subjects)—analyze the differences as a single sample
Reduces variability by controlling for subject-to-subject differences; often more powerful than two-sample test
Conditions apply to the differences, not the original measurements—check that differences are approximately normal
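A sketch with hypothetical before/after scores shows why "analyze the differences" reduces a paired design to a one-sample problem:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical before/after scores for 8 matched subjects (illustrative)
before = [72, 68, 75, 80, 66, 71, 78, 69]
after  = [75, 70, 74, 84, 70, 73, 82, 71]

# Paired analysis: reduce to one sample of differences, then run a
# one-sample t-test on those differences against mu_d = 0
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)
d_bar = mean(diffs)
s_d = stdev(diffs)
t = d_bar / (s_d / sqrt(n))
print(d_bar, t, n - 1)   # conditions are checked on diffs, not the raw scores
```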
Compare: Two-sample vs. paired t-test—both compare two groups, but paired tests use the same subjects measured twice (or matched pairs), while two-sample tests use independent groups. Choosing the wrong test is a common FRQ error; always identify whether data are paired or independent.
Categorical Analysis: Chi-Square Tests
Chi-square tests assess whether observed categorical data match expected patterns. The test statistic χ² = Σ (O − E)²/E measures the total squared deviation of observed counts from expected counts, standardized by the expected counts.
Chi-Square Goodness-of-Fit
Tests whether a single categorical variable follows a hypothesized distribution—comparing observed counts to expected counts
Expected counts come from the hypothesized proportions multiplied by sample size
Degrees of freedom = number of categories − 1
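A goodness-of-fit sketch with invented die-roll counts—the expected counts are just the hypothesized proportions times the sample size:

```python
# Hypothetical: are 120 die rolls consistent with a fair die? (illustrative)
observed = [25, 17, 18, 22, 16, 22]
n = sum(observed)                       # 120 rolls
expected = [n / 6] * 6                  # fair die: 20 expected per face

# Chi-square statistic: sum of (O - E)^2 / E over all categories
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                  # categories - 1 = 5
print(chi_sq, df)
```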
Chi-Square Test for Independence
Tests whether two categorical variables are associated in a single population sampled randomly
Expected counts calculated as (row total × column total)/grand total for each cell
Null hypothesis: the variables are independent (no association); alternative: variables are associated
Chi-Square Test for Homogeneity
Tests whether the distribution of a categorical variable is the same across different populations
Data collection differs from independence: samples are taken separately from each population
Same calculation as independence test, but different context and hypotheses
Compare: Independence vs. homogeneity—both use identical calculations and the same χ2 formula, but independence tests one sample for association between variables, while homogeneity tests multiple populations for identical distributions. The FRQ will signal which one by describing how data were collected.
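The shared calculation behind the independence and homogeneity tests can be sketched on a hypothetical two-way table—only the sampling design and hypotheses differ:

```python
# Hypothetical 2x3 two-way table of counts (illustrative numbers).
# The same expected-count and chi-square calculation serves both the
# independence test and the homogeneity test.
table = [
    [30, 45, 25],
    [20, 35, 45],
]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Expected count for each cell: (row total x column total) / grand total
expected = [[r * c / grand for c in col_totals] for r in row_totals]

chi_sq = sum(
    (o - e) ** 2 / e
    for obs_row, exp_row in zip(table, expected)
    for o, e in zip(obs_row, exp_row)
)
df = (len(table) - 1) * (len(table[0]) - 1)   # (rows - 1)(cols - 1)
print(chi_sq, df)
```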
Relationships: Regression Inference
Regression inference extends correlation and line-fitting to make claims about population relationships. You're testing whether the true slope β differs from zero—if it does, there's a linear relationship between variables in the population.
T-Test for Slope
Tests H0: β = 0 (no linear relationship) using t = (b − 0)/SE_b, where b is the sample slope
Conditions: linear relationship (check residual plot), independent observations, normal residuals, equal variance (constant spread in residual plot)
Degrees of freedom = n−2 for simple linear regression
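The slope test can be carried out from scratch; everything below (data included) is illustrative, with SE_b built from the residual standard deviation:

```python
from math import sqrt
from statistics import mean

# Hypothetical (x, y) data; test H0: beta = 0 (illustrative numbers)
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]

n = len(xs)
x_bar, y_bar = mean(xs), mean(ys)

sxx = sum((x - x_bar) ** 2 for x in xs)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
b = sxy / sxx                          # sample slope
a = y_bar - b * x_bar                  # intercept

residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
s = sqrt(sum(r ** 2 for r in residuals) / (n - 2))   # residual SD, df = n - 2
se_b = s / sqrt(sxx)                   # standard error of the slope
t = (b - 0) / se_b
print(b, se_b, t)   # compare t to a t-distribution with n - 2 = 6 df
```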
Confidence Interval for Slope
Estimates the true population slope with b ± t*·SE_b
If the interval contains zero, you cannot conclude a significant linear relationship exists
Interpretation: "We are 95% confident that for each one-unit increase in x, y changes by between [lower] and [upper] units on average"
Correlation Coefficient
Pearson's r measures strength and direction of linear association; r² gives the proportion of variance explained
r is unitless and ranges from −1 to +1; outliers can dramatically affect its value
Correlation ≠ causation—even strong correlations don't prove one variable causes changes in another
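Computing r from its definitional formula, on hypothetical paired data:

```python
from math import sqrt
from statistics import mean

# Hypothetical paired data (illustrative); r itself is unitless
xs = [2, 4, 6, 8, 10]
ys = [1.0, 2.2, 2.8, 4.1, 4.9]

x_bar, y_bar = mean(xs), mean(ys)
sxx = sum((x - x_bar) ** 2 for x in xs)
syy = sum((y - y_bar) ** 2 for y in ys)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))

r = sxy / sqrt(sxx * syy)
print(r, r ** 2)   # r**2 = proportion of variance in y explained by x
```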
Compare: Correlation (r) vs. slope (b)—both measure linear relationships, but r is standardized (unitless, between −1 and +1) while b has units and tells you the actual rate of change. You can have a strong correlation with a small slope or vice versa. FRQs may ask you to interpret both.
Understanding Errors and Power
Every hypothesis test risks making mistakes. Understanding error types and power helps you interpret results appropriately and design better studies. The key trade-off: reducing one type of error typically increases the other, unless you increase sample size.
Type I and Type II Errors
Type I error (α): rejecting H0 when it's actually true—a "false positive" or false alarm
Type II error (β): failing to reject H0 when it's actually false—a "false negative" or missed detection
Consequences depend on context: in medical testing, Type I might mean unnecessary treatment; Type II might mean missing a disease
Power of a Test
Power = 1−β = probability of correctly rejecting a false null hypothesis
Power analysis helps determine needed sample size before collecting data—aim for power ≥ 0.80 typically
Trade-offs in Test Design
Lowering α (being stricter) reduces Type I error but increases Type II error and decreases power
Increasing sample size is the only way to reduce both error types simultaneously
One-tailed tests have more power than two-tailed tests for detecting effects in the specified direction
Compare: Type I vs. Type II errors—Type I is rejecting truth (false positive, probability = α), Type II is missing falsehood (false negative, probability = β). A classic FRQ setup: describe consequences of each error type in a given context, then explain which is more serious and how you'd adjust α accordingly.
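Power is straightforward to estimate by simulation. This sketch (all parameter values invented) approximates the power of a one-sided z-test when the true mean sits half a standard deviation above the null value:

```python
import random
from statistics import NormalDist

random.seed(2)

# Monte Carlo sketch: power of a one-sided z-test of H0: mu = 0 vs
# Ha: mu > 0, when in truth mu = 0.5, sigma = 1, n = 25 (illustrative).
alpha, mu_true, sigma, n, trials = 0.05, 0.5, 1.0, 25, 10_000
z_crit = NormalDist().inv_cdf(1 - alpha)     # reject H0 when z > ~1.645

def one_test():
    # draw a sample under the TRUE mean, then test against the null mean 0
    sample = [random.gauss(mu_true, sigma) for _ in range(n)]
    z = (sum(sample) / n - 0) / (sigma / n ** 0.5)
    return z > z_crit

# Power = fraction of tests that correctly reject the false null
power = sum(one_test() for _ in range(trials)) / trials
print(power)
```

Rerunning with a smaller α, a smaller true effect, or a smaller n shows each trade-off from the bullets above directly.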
Quick Reference Table
Concept
Best Examples
Estimating parameters
Confidence intervals for proportions, means, differences, slopes
Regression inference
T-test for slope, confidence interval for slope, correlation
Decision errors
Type I error, Type II error, power
Conditions for inference
Random sampling, independence (10% condition), normality/large counts
Key formulas
Standard error, test statistic, margin of error, degrees of freedom
Self-Check Questions
What conditions must you verify before constructing a confidence interval for a population proportion, and why does each condition matter?
Compare and contrast chi-square tests for independence and homogeneity: How do they differ in data collection, hypotheses, and interpretation, despite using identical calculations?
A researcher obtains a p-value of 0.03. Explain what this means in the context of hypothesis testing, and identify one common misinterpretation students should avoid.
Which factors increase the power of a hypothesis test? If you wanted to reduce both Type I and Type II error rates simultaneously, what would you need to change?
When would you use a paired t-test instead of a two-sample t-test? Describe a scenario where choosing the wrong test would lead to incorrect conclusions, and explain why.