Statistical inference is the heart of AP Statistics. It's where you move from describing data to making claims about entire populations based on samples. Every confidence interval you construct and every hypothesis test you run connects back to one fundamental question: How confident can we be that our sample tells us something true about the world?
These methods aren't isolated techniques to memorize separately. They form an interconnected framework built on sampling distributions, standard error, and probability. Whether you're estimating a proportion, comparing two means, or testing for independence in a two-way table, you're applying the same core logic: quantify uncertainty, check conditions, draw conclusions. Know what concept each method illustrates and when to apply it.
Estimation: Confidence Intervals
Confidence intervals answer the question "What's a reasonable range for the true parameter?" They combine a point estimate with a margin of error to capture uncertainty. The critical idea: you're not saying the parameter is definitely in the interval. You're saying your method produces intervals that capture the true parameter a certain percentage of the time.
Confidence Intervals for Proportions
One-sample z-interval uses p̂ ± z*·√(p̂(1 − p̂)/n). Notice the standard error shrinks as n increases, which is why larger samples give narrower intervals.
Success-failure condition requires np̂ ≥ 10 and n(1 − p̂) ≥ 10. This ensures the sampling distribution of p̂ is approximately normal so the z-based method is valid.
Interpretation must reference the method: "We are 95% confident that the true population proportion is between..." Never say the parameter "probably" falls in the interval. The parameter is fixed; it's the interval that varies from sample to sample.
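The interval above can be sketched in a few lines of Python using only the standard library; the helper name `prop_ci` is illustrative, not an official procedure name.

```python
from math import sqrt
from statistics import NormalDist

def prop_ci(successes, n, conf=0.95):
    """One-sample z-interval: p-hat +/- z* * sqrt(p-hat * (1 - p-hat) / n)."""
    p_hat = successes / n
    # Success-failure condition: both counts must be at least 10
    assert n * p_hat >= 10 and n * (1 - p_hat) >= 10
    z_star = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # about 1.96 for 95%
    margin = z_star * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# 60 successes in a sample of 100 gives an interval of roughly (0.504, 0.696)
low, high = prop_ci(60, 100)
```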
Confidence Intervals for Means
T-intervals replace z* with t* because you're estimating the population standard deviation with s. This extra uncertainty produces wider intervals than z-intervals would.
Degrees of freedom (df = n − 1 for one sample) determine which t-distribution to use. Smaller df means heavier tails and wider intervals. As df grows large, the t-distribution approaches the standard normal.
Robustness to non-normality increases with sample size due to the Central Limit Theorem. For small samples (roughly n<30), check for strong skewness or outliers using a dotplot, histogram, or normal probability plot.
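A minimal t-interval sketch, assuming SciPy is available for the t* critical value (the helper name `t_interval` is illustrative):

```python
from math import sqrt
from statistics import mean, stdev
from scipy import stats  # assumed available; used only for the t* critical value

def t_interval(data, conf=0.95):
    """One-sample t-interval: x-bar +/- t* * s / sqrt(n), with df = n - 1."""
    n = len(data)
    x_bar, s = mean(data), stdev(data)  # stdev divides by n - 1 (sample sd)
    t_star = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    margin = t_star * s / sqrt(n)
    return x_bar - margin, x_bar + margin

# Five observations: wide interval, because df = 4 means a large t*
low, high = t_interval([4, 5, 6, 7, 8])
```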
Confidence Intervals for Differences
Two-proportion z-interval uses SE = √(p̂₁(1 − p̂₁)/n₁ + p̂₂(1 − p̂₂)/n₂). You add the variances from each sample, not the standard errors. This is because the variance of a difference of independent random variables equals the sum of their variances.
If zero is in the interval, you cannot conclude a significant difference exists between the populations.
Direction matters: interpret which group is larger based on how you defined the difference (p̂₁ − p̂₂). An interval that lies entirely below zero is evidence that p₂ is larger.
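A standard-library sketch of the two-proportion interval, with hypothetical counts chosen so that zero falls outside the interval:

```python
from math import sqrt
from statistics import NormalDist

def two_prop_ci(x1, n1, x2, n2, conf=0.95):
    """Two-proportion z-interval for p1 - p2: the variances add, not the SEs."""
    p1, p2 = x1 / n1, x2 / n2
    z_star = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - z_star * se, (p1 - p2) + z_star * se

# Hypothetical samples: 45/100 vs 30/100; the whole interval is above zero,
# so we have evidence of a real difference in this direction
low, high = two_prop_ci(45, 100, 30, 100)
```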
Compare: Confidence intervals for proportions vs. means both use point estimate ± margin of error, but proportions use z* (the sampling distribution shape is known) while means use t* (variability is estimated from the sample). On FRQs, always name which procedure you're using and why.
Decision-Making: Hypothesis Testing Framework
Hypothesis testing formalizes the question "Is this result surprising enough to reject chance?" You assume the null hypothesis is true, calculate how unlikely your observed data would be under that assumption, and make a decision. The logic is indirect: you're not proving the alternative. You're assessing whether the null is plausible.
Null and Alternative Hypotheses
Null hypothesis (H₀) represents "no effect" or "no difference." It's the claim you're testing against, stated with equality (e.g., p = 0.5, μ₁ − μ₂ = 0).
Alternative hypothesis (Hₐ) is what you're trying to find evidence for. It can be one-sided (< or >) or two-sided (≠). The research question determines which form to use, and you must choose before looking at the data.
You never "accept" H₀. You either reject it or fail to reject it. Failing to reject means the data didn't provide enough evidence against H₀, not that H₀ is true.
P-Values
Definition: the probability of observing results as extreme as or more extreme than the sample data, assuming H₀ is true.
Small p-values (typically below 0.05) indicate the observed data would be unusual under H₀, providing evidence against it.
P-value is NOT the probability that H₀ is true. This is the single most common misconception on the AP exam. The p-value is computed assuming H₀ is true, so it can't simultaneously tell you the probability that H₀ is true.
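The "computed assuming H₀ is true" point shows up directly in the calculation: the standard error uses the null value p₀, not the sample proportion. A standard-library sketch for a one-proportion z-test:

```python
from math import sqrt
from statistics import NormalDist

def one_prop_z_pvalue(successes, n, p0, two_sided=True):
    """P-value for a one-proportion z-test, computed under the assumption H0 holds."""
    p_hat = successes / n
    se = sqrt(p0 * (1 - p0) / n)  # SE uses p0 because H0 is assumed true
    z = (p_hat - p0) / se
    tail = 1 - NormalDist().cdf(abs(z))  # P(result at least this extreme)
    return 2 * tail if two_sided else tail

# 58 heads in 100 flips of a coin claimed fair: z = 1.6, p-value about 0.11,
# so the result is not surprising enough to reject fairness at alpha = 0.05
p_value = one_prop_z_pvalue(58, 100, 0.5)
```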
Significance Level (α)
Pre-set threshold (usually 0.05 or 0.01) that determines when you reject H₀. If the p-value ≤ α, reject H₀.
Choosing α involves weighing consequences: lower α reduces false positives (Type I errors) but increases false negatives (Type II errors).
Statistical significance ≠ practical significance. A tiny, meaningless difference can be "statistically significant" with a large enough sample. Always consider whether the effect size actually matters in context.
Compare: P-value vs. significance level: the p-value is calculated from your data, while α is chosen before collecting data. Think of α as your threshold and the p-value as your evidence. FRQs often ask you to explain what a p-value means in context. Never say it's the probability the null is true.
Comparing Groups: Tests for Means
When comparing numerical outcomes across groups, you're testing whether observed differences reflect real population differences or just sampling variability. The test statistic measures how many standard errors your observed difference is from the null hypothesis value.
One-Sample T-Test
Tests whether a population mean equals a hypothesized value using t = (x̄ − μ₀)/(s/√n).
Conditions (check all three):
Random: data come from a random sample or randomized experiment
Independence: observations are independent; if sampling without replacement, verify n ≤ 10% of the population (the 10% condition)
Normality: the population is approximately normal, or n is large enough for the CLT. For small samples, check graphs for strong skewness or outliers.
Degrees of freedom = n − 1; use the t-distribution to find the p-value
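In practice this test is usually run from calculator or software output. A sketch using SciPy (assumed available) on hypothetical data:

```python
from scipy import stats  # assumed available

# Hypothetical battery lifetimes in hours; test H0: mu = 10 vs Ha: mu != 10
data = [9.8, 10.4, 9.5, 10.1, 9.2, 9.9, 10.0, 9.4]
t_stat, p_value = stats.ttest_1samp(data, popmean=10)  # two-sided, df = 7
# The sample mean is below 10, so t_stat is negative; the p-value here is
# not small, so we fail to reject H0 at alpha = 0.05
```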
Two-Sample T-Test
Compares means from two independent groups. The standard error combines variability from both samples: SE = √(s₁²/n₁ + s₂²/n₂).
Don't pool variances unless specifically told to assume equal population variances. AP Statistics uses the unpooled (Welch's) procedure by default.
Degrees of freedom calculation is complex (Welch-Satterthwaite formula). Use calculator output, or for a conservative approach, use df = min(n₁ − 1, n₂ − 1).
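A sketch of the unpooled procedure with SciPy (assumed available) on hypothetical scores; note `equal_var=False` is what selects Welch's test:

```python
from scipy import stats  # assumed available

# Hypothetical exam scores under two teaching methods (independent groups)
group_a = [82, 75, 90, 68, 85, 79, 88]
group_b = [70, 74, 65, 80, 72, 69, 77]

# equal_var=False requests the unpooled Welch procedure (the AP default);
# SciPy handles the Welch-Satterthwaite df internally
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
```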
Paired T-Test
Used when observations are naturally paired (before/after measurements, matched subjects, two measurements on the same individual). You compute the differences and analyze them as a single sample.
Reduces variability by controlling for subject-to-subject differences, which often makes the test more powerful than a two-sample test on the same data.
Conditions apply to the differences, not the original measurements. Check that the differences are approximately normal (or that n is large enough).
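Because a paired test is just a one-sample test on the differences, the two routes below give identical results (SciPy assumed available; the before/after data are hypothetical):

```python
from scipy import stats  # assumed available

# Hypothetical before/after scores for the same six students
before = [72, 80, 65, 90, 70, 78]
after = [75, 84, 66, 95, 74, 80]
diffs = [a - b for a, b in zip(after, before)]  # analyze the differences

# Paired test = one-sample t-test on the differences against 0
t_stat, p_value = stats.ttest_1samp(diffs, popmean=0)
t_rel, p_rel = stats.ttest_rel(after, before)  # equivalent shortcut
```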
Compare: Two-sample vs. paired t-test: both compare two groups, but paired tests use the same subjects measured twice (or matched pairs), while two-sample tests use independent groups. Choosing the wrong test is a common FRQ error. Always identify whether the data are paired or independent before selecting your procedure.
Categorical Analysis: Chi-Square Tests
Chi-square tests assess whether observed categorical data match expected patterns. The test statistic χ² = Σ (O − E)²/E measures the total squared deviation of observed from expected counts, scaled by the expected counts. Larger values of χ² mean the data deviate more from what you'd expect, and the test is always right-tailed (you only look at the upper tail).
All three chi-square tests share the same conditions: random sample(s), independent observations, and all expected counts ≥ 5. That expected-count condition is what you check instead of the success-failure condition used for proportions.
Chi-Square Goodness-of-Fit
Tests whether a single categorical variable follows a hypothesized distribution. For example: do the colors in a bag of candy match the company's claimed percentages?
Expected counts = hypothesized proportion ร sample size for each category
Degrees of freedom = number of categories โ 1
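A goodness-of-fit sketch with SciPy (assumed available) on the candy example, using made-up counts:

```python
from scipy import stats  # assumed available

# Hypothetical candy bag: company claims 30% red, 20% blue, 50% green; n = 200
observed = [52, 45, 103]
expected = [p * 200 for p in (0.30, 0.20, 0.50)]  # 60, 40, 100; all >= 5
chi2, p_value = stats.chisquare(observed, f_exp=expected)  # df = 3 - 1 = 2
# A large p-value here means the observed colors are consistent with the claim
```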
Chi-Square Test for Independence
Tests whether two categorical variables are associated, using one random sample from a single population
Expected counts for each cell: (row total × column total) / grand total
H₀: the variables are independent (no association); Hₐ: the variables are associated
Degrees of freedom = (number of rows โ 1) ร (number of columns โ 1)
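An independence-test sketch with SciPy (assumed available) on a hypothetical two-way table; the expected counts it returns follow the row-total × column-total / grand-total formula above:

```python
from scipy import stats  # assumed available

# Hypothetical single sample classified by two variables:
# rows = snack preference (2 levels), columns = grade level (3 levels),
# so df = (2 - 1) x (3 - 1) = 2
table = [[20, 30, 25],
         [30, 20, 25]]
chi2, p_value, dof, expected = stats.chi2_contingency(table)
# Every expected count is 75 * 50 / 150 = 25 here, comfortably >= 5
```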
Chi-Square Test for Homogeneity
Tests whether the distribution of a categorical variable is the same across different populations. For example: do students at three different schools have the same distribution of favorite subjects?
Data collection differs from independence: separate random samples are drawn from each population, rather than one sample classified by two variables.
Same formula and df calculation as the independence test, but the hypotheses and context differ.
Compare: Independence vs. homogeneity use identical calculations and the same χ² formula, but independence tests one sample for association between two variables, while homogeneity tests multiple populations for identical distributions. The FRQ will signal which one by describing how data were collected: one sample with two variables → independence; separate samples from different populations → homogeneity.
Relationships: Regression Inference
Regression inference extends correlation and line-fitting to make claims about population relationships. You're testing whether the true slope β differs from zero. If it does, there's evidence of a linear relationship between the variables in the population.
T-Test for Slope
Tests H₀: β = 0 (no linear relationship) using t = (b − 0)/SE_b, where b is the sample slope and SE_b is its standard error (typically provided in computer output).
Conditions (remember the acronym LINE):
Linear: the true relationship is linear (check the residual plot for no pattern)
Independent: observations are independent of each other
Normal: residuals are approximately normally distributed (check a histogram or normal probability plot of residuals)
Equal variance: the spread of residuals is roughly constant across all values of x (check for a consistent band in the residual plot)
Degrees of freedom = n − 2 for simple linear regression
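Computer output for this test can be reproduced with SciPy (assumed available) on hypothetical data; `linregress` reports b, SE_b, and the two-sided p-value directly:

```python
from scipy import stats  # assumed available

# Hypothetical data: hours studied (x) vs exam score (y)
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [52, 55, 61, 58, 66, 70, 74, 75]
res = stats.linregress(x, y)
# res.slope is b, res.stderr is SE_b, res.pvalue tests H0: beta = 0 (two-sided)
t_stat = (res.slope - 0) / res.stderr  # df = n - 2 = 6
```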
Confidence Interval for Slope
Estimates the true population slope with b ± t*·SE_b
If the interval contains zero, you cannot conclude a significant linear relationship exists.
Interpretation: "We are 95% confident that for each one-unit increase in x, y changes by between [lower bound] and [upper bound] units, on average."
Correlation Coefficient
Pearson's r measures the strength and direction of linear association. r² gives the proportion of variation in y explained by the linear relationship with x.
r is unitless and ranges from −1 to +1. Outliers can dramatically affect its value.
Correlation ≠ causation. Even strong correlations don't prove one variable causes changes in another. Only a well-designed randomized experiment can establish causation.
Compare: Correlation (r) vs. slope (b): both measure linear relationships, but r is standardized (unitless, between −1 and +1) while b has units and tells you the actual rate of change. You can have a strong correlation with a small slope or vice versa. FRQs may ask you to interpret both from the same regression output.
Understanding Errors and Power
Every hypothesis test risks making mistakes. Understanding error types and power helps you interpret results and design better studies. The key trade-off: reducing one type of error typically increases the other, unless you increase sample size.
Type I and Type II Errors
Type I error (probability = α): rejecting H₀ when it's actually true. This is a "false positive" or false alarm.
Type II error (probability = β): failing to reject H₀ when it's actually false. This is a "false negative" or missed detection.
Consequences depend on context: in medical testing, a Type I error might mean giving unnecessary treatment to a healthy patient; a Type II error might mean failing to diagnose a disease. FRQs frequently ask you to describe both errors in context and explain which is more serious.
Power of a Test
Power = 1 − β = the probability of correctly rejecting a false null hypothesis. Higher power means you're more likely to detect a real effect.
Power increases with:
Larger sample size (n)
Larger true effect size (the farther the true parameter is from the H₀ value)
Higher significance level (α)
Lower population variability (σ)
A common benchmark is power ≥ 0.80. Power analysis before data collection helps you determine the sample size needed to detect a meaningful effect.
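The effect of sample size on power can be computed exactly for a one-sided proportion z-test. A standard-library sketch (the scenario values 0.5, 0.6 are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def one_sided_prop_power(p0, p_true, n, alpha=0.05):
    """Power of the one-sided z-test of H0: p = p0 vs Ha: p > p0,
    when the true proportion is p_true (assumed above p0)."""
    z_star = NormalDist().inv_cdf(1 - alpha)
    cutoff = p0 + z_star * sqrt(p0 * (1 - p0) / n)  # reject H0 when p-hat > cutoff
    se_true = sqrt(p_true * (1 - p_true) / n)       # sampling sd under the truth
    return 1 - NormalDist().cdf((cutoff - p_true) / se_true)

# Doubling n from 100 to 200 raises power from about 0.64 to about 0.89
power_100 = one_sided_prop_power(0.5, 0.6, 100)
power_200 = one_sided_prop_power(0.5, 0.6, 200)
```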
Trade-offs in Test Design
Lowering α (being stricter about rejecting) reduces Type I error but increases Type II error and decreases power.
Increasing sample size is the only way to reduce both error types simultaneously. This is why sample size planning matters so much in study design.
One-sided tests have more power than two-sided tests for detecting effects in the specified direction, but they can't detect effects in the opposite direction.
Compare: Type I vs. Type II errors: Type I is rejecting truth (false positive, probability = α), Type II is missing falsehood (false negative, probability = β). A classic FRQ setup: describe consequences of each error type in a given scenario, then explain which is more serious and how you'd adjust α accordingly.
Quick Reference Table
Estimating parameters: Confidence intervals for proportions, means, differences, slopes
Regression inference: T-test for slope, confidence interval for slope, correlation
Decision errors: Type I error, Type II error, power
Conditions for inference: Random sampling, independence (10% condition), normality/large counts
Key formulas: Standard error, test statistic, margin of error, degrees of freedom
Self-Check Questions
What conditions must you verify before constructing a confidence interval for a population proportion, and why does each condition matter?
Compare and contrast chi-square tests for independence and homogeneity: How do they differ in data collection, hypotheses, and interpretation, despite using identical calculations?
A researcher obtains a p-value of 0.03. Explain what this means in the context of hypothesis testing, and identify one common misinterpretation students should avoid.
Which factors increase the power of a hypothesis test? If you wanted to reduce both Type I and Type II error rates simultaneously, what would you need to change?
When would you use a paired t-test instead of a two-sample t-test? Describe a scenario where choosing the wrong test would lead to incorrect conclusions, and explain why.