Just as in other units on inference, there are two sides to every procedure: the estimation method (confidence intervals) and the testing method (significance tests). Since the previous sections dealt with confidence intervals for the slope, we will now tackle the testing side by testing claims about our population.
Recall that we model sample slopes using a t-distribution. As previous units illustrated, a t-test is a statistical test commonly used to determine whether there is a significant difference between a sample statistic and a hypothesized value. In the context of a regression model, a t-test for the slope tests the statistical significance of the slope, which describes the relationship between the independent variable (also known as the predictor or explanatory variable) and the dependent (response) variable.
In general, if the t-statistic for the slope is significantly different from zero, it suggests that there is a meaningful linear relationship between the two variables and that the true slope is not equal to zero. On the other hand, if the t-statistic is not significantly different from zero, we do not have convincing evidence of a linear relationship; we cannot conclude that the slope differs from zero (though that does not prove the slope is exactly zero).
Hypotheses
Before we perform our test, the first thing to nail down is our null and alternative hypotheses. Since we are performing a hypothesis test on the slope of a regression model, the null and alternative hypotheses will look like this:
- H0: β = β0
- Ha: β ≠ β0, β < β0, or β > β0
(where β0 is the hypothesized value from the null hypothesis)
For example, an Easter candy researcher may claim that the correlation between the number of jelly beans consumed per day and the amount of Easter grass cluttering the house has a slope of 40 ("As the jelly bean consumption increases by 1, the number of easter grass pieces is predicted to increase by 40"). 🐰
If this were the test, you would test it using these hypotheses:
- H0: β = 40
- Ha: β ≠ 40
Often, with hypothesis tests for slopes, we are not testing against a specific claimed value; we are simply testing whether the two variables are linearly related at all. In that case, β0 = 0, and we are testing against a null hypothesis that the slope is 0 (no linear relationship).

Conditions
Just like our other hypothesis tests, we have conditions for inference that must be met. For a hypothesis test for slope, here are the four necessary conditions:
- Linear: the residual plot does not appear to show a pattern in the relationship between x and y.
- Equal SD: the standard deviation of y does not vary with x (check for no “fanning” on the residual plot).
- Independence:
  - Random sample or randomized experiment
  - 10% condition (when sampling without replacement)
- Normal: for any particular value of x, the responses for y are approximately normally distributed. This is satisfied when either:
  - the sample size is at least 30, or
  - the sample data are free of strong skewness and outliers.

All of these conditions must be stated explicitly before proceeding to calculate the actual test! (A rough sketch of the residual-plot checks is shown below.)
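If you have the raw data, a residual plot makes the linearity and equal-SD checks concrete. Below is a minimal sketch using made-up jelly-bean data; the numbers, variable names, and use of NumPy/Matplotlib are illustrative assumptions, not part of the AP-required work.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: x = jelly beans eaten per day, y = pieces of Easter grass found.
rng = np.random.default_rng(0)
x = rng.uniform(1, 20, size=40)
y = 5 + 40 * x + rng.normal(0, 25, size=40)

# Fit the least-squares line and compute the residuals (y - y_hat).
b, a = np.polyfit(x, y, deg=1)   # np.polyfit returns [slope, intercept] for deg=1
residuals = y - (a + b * x)

# Residual plot: look for (1) no curved pattern -> linear condition,
# and (2) roughly constant vertical spread -> SD of y does not vary with x.
plt.scatter(x, residuals)
plt.axhline(0, color="gray", linestyle="--")
plt.xlabel("x (jelly beans per day)")
plt.ylabel("residual")
plt.title("Residual plot: look for patterns and fanning")
plt.show()
```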
What Test Do I Run?
The test you will run in this instance is a Linear Regression T Test for Slopes. In most graphing calculators, this is known as LinRegTTest under the Stats>Tests menu.
Since we are dealing with quantitative data and it is unlikely we know the population standard deviation of y, we must use a t distribution for our critical value.
Now that we have our test set up…
Let’s go! You have now verified that the conditions are met, written your hypotheses, and identified the correct test, so we can calculate!
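If you want to see what LinRegTTest is doing under the hood, here is a minimal by-hand sketch in Python. It uses the AP formula-sheet standard error for the slope and tests the jelly-bean claim H0: β = 40; the data values and the use of NumPy/SciPy are made-up assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical data (illustration only): x = jelly beans per day, y = pieces of Easter grass.
x = np.array([2, 4, 5, 7, 8, 10, 12, 15], dtype=float)
y = np.array([90, 170, 195, 300, 310, 420, 470, 600], dtype=float)

n = len(x)
b, a = np.polyfit(x, y, deg=1)                    # sample slope b, intercept a
y_hat = a + b * x
s = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))   # SD of the residuals
se_b = s / (np.std(x, ddof=1) * np.sqrt(n - 1))   # AP formula-sheet SE of the slope

beta0 = 40                                        # hypothesized slope from H0: beta = 40
t_stat = (b - beta0) / se_b
df = n - 2
p_value = 2 * stats.t.sf(abs(t_stat), df)         # two-sided p-value for Ha: beta != 40

print(f"b = {b:.2f}, SE_b = {se_b:.2f}, t = {t_stat:.2f}, df = {df}, p = {p_value:.4f}")
```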
Vocabulary
The following words are mentioned explicitly in the College Board Course and Exam Description for this topic.
| Term | Definition |
|---|---|
| alternative hypothesis | The claim that contradicts the null hypothesis, representing what the researcher is trying to find evidence for. |
| independence | The condition that observations in a sample are not influenced by each other, typically ensured through random sampling or randomized experiments. |
| linear relationship | A relationship between two variables that can be described by a straight line. |
| normal distribution | A probability distribution that is mound-shaped and symmetric, characterized by a population mean (μ) and population standard deviation (σ). |
| null hypothesis | The initial claim or assumption being tested in a hypothesis test, typically stating that there is no effect or no difference. |
| outlier | Data points that are unusually small or large relative to the rest of the data. |
| random sample | A sample selected from a population in such a way that every member has an equal chance of being chosen, reducing bias and allowing for valid statistical inference. |
| randomized experiment | A study design where subjects are randomly assigned to treatment groups to establish cause-and-effect relationships. |
| regression model | A statistical model that describes the relationship between a response variable (y) and one or more explanatory variables (x). |
| residual | The difference between the actual observed value and the predicted value in a regression model, calculated as residual = y - ŷ. |
| sampling without replacement | A sampling method in which an item selected from a population cannot be selected again in subsequent draws. |
| significance test | A statistical procedure used to determine whether there is sufficient evidence to reject the null hypothesis based on sample data. |
| skewness | A measure of the asymmetry of a distribution, indicating whether data is concentrated more on one side of the center. |
| slope | The value b in the regression equation ŷ = a + bx, representing the rate of change in the predicted response for each unit increase in the explanatory variable. |
| slope of a regression model | The coefficient that represents the rate of change in the predicted response variable for each unit increase in the explanatory variable in a linear regression equation. |
| standard deviation | A measure of how spread out data values are from the mean, represented by σ in the context of a population. |
| t-test for a slope | A hypothesis test used to determine whether the slope of a regression model is significantly different from zero, assessing whether there is a statistically significant linear relationship between variables. |
Frequently Asked Questions
How do I set up a t-test for the slope of a regression line?
Pick the t-test for a slope. Steps (concise):
1. Hypotheses: H0: β = β0 (often β0 = 0). Ha: β < β0, β > β0, or β ≠ β0 (use context).
2. Test statistic: t = (b − β0) / SE_b, where b is the sample slope and SE_b is its standard error from the regression output. Degrees of freedom = n − 2. Use the t-distribution to get a p-value.
3. Conditions (verify before trusting p-value): linear relationship, constant σy (homoscedasticity), independence (random sample or randomized experiment; n ≤ 10% N if sampling without replacement), and approximate normality of residuals (or n > 30 if skewed). Check residual plots for these (CED VAR-7.L).
4. Decision: compare p-value to α. If p ≤ α reject H0; interpret result in context (evidence for/against slope ≠ β0).
For an AP-aligned walkthrough, see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG). For extra practice, try problems at (https://library.fiveable.me/practice/ap-statistics).
What's the difference between testing a slope and testing a mean?
Testing a slope and testing a mean are similar in logic (both use t-tests) but differ in what you’re testing and how you check conditions. For a mean you test H₀: μ = μ₀ using t = (x̄ − μ₀)/(s/√n) with df = n−1 and you check random sampling, independence (10% rule), and near-normality of the data (or n>30). For a regression slope you test H₀: β = β₀ (usually β₀ = 0) with t = (b − β₀)/SE(b), where SE(b) comes from residual variability and df = n−2. Regression inference requires extra checks: linearity between x and y, constant variance (homoscedasticity), independence (random sample or experiment + 10% rule), and normality of residuals (or larger n). You inspect residual plots to verify those conditions. For AP Stats alignment see Topic 9.4 (t-test for slope, hypotheses H₀: β=β₀, Hₐ: β≠/>β₀) and the conditions in the CED. Review the Fiveable study guide for Topic 9.4 (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG) and practice problems (https://library.fiveable.me/practice/ap-statistics).
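As a side-by-side summary of the two test statistics described above:

$$t_{\text{mean}} = \frac{\bar{x} - \mu_0}{s/\sqrt{n}} \ (df = n - 1) \qquad\qquad t_{\text{slope}} = \frac{b - \beta_0}{SE_b} \ (df = n - 2)$$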
I'm confused about the null hypothesis for slope - is it always H₀: β = 0?
Short answer: no—H₀ is H₀: β = β₀, and most often β₀ = 0, but it doesn’t have to be. Explanation: AP’s CED (Topic 9.4) says the t-test for a slope has null H₀: β = β₀ and alternative Hₐ: β < β₀, > β₀, or ≠ β₀. We usually set β₀ = 0 when we’re asking “is there a linear association (nonzero slope) between x and y?”—that’s the common test on the exam. But if a problem gives a specific hypothesized slope (say β₀ = 2), you test against that value instead. Use the t-test for slope with df = n − 2, check the regression conditions (linearity, constant variance, independence, normal residuals) before making inferences (CED VAR-7.L). For the AP-aligned study guide see (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG). For more review and practice, check the Unit 9 overview (https://library.fiveable.me/ap-statistics/unit-9) and practice problems (https://library.fiveable.me/practice/ap-statistics).
When do I use a t-test for slope vs other regression tests?
Use a t-test for the slope whenever you want to make an inference about the population slope β in a simple linear regression (that’s the AP CED rule: VAR-7.J.1). That means your null is H0: β = β0 (often β0 = 0) and Ha is one-sided or two-sided (VAR-7.K.1). Compute t = (b − β0)/SE(b) and compare to a t distribution with df = n − 2. Before you use it, verify the CED conditions (VAR-7.L.1): linear relationship (residual plot), constant σy (no funnel in residuals), independence (random sample or experiment and the 10% rule if sampling without replacement), and approximate normality of residuals (or n > 30 if skewed). If these fail, don’t trust the t-test—consider transforming, using a different model, or a nonparametric approach. For the AP review, see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG) and more unit resources (https://library.fiveable.me/ap-statistics/unit-9). For practice, try the AP problem sets (https://library.fiveable.me/practice/ap-statistics).
What are the conditions I need to check before doing a slope test?
Before you run the t-test for a regression slope (H₀: β = β₀ vs Hₐ: β ≠/> β₀), check these CED conditions:
- Linearity: the true relationship between x and y is linear. Use a residual plot (no pattern).
- Constant variance (homoscedasticity): residuals have roughly equal spread for all x (no funnel shape).
- Independence: data come from a random sample or randomized experiment; if sampling without replacement, ensure n ≤ 10% of the population.
- Normality of responses at each x: residuals are approximately normal. If residuals look skewed, you want n > 30; if n < 30, you need no strong skewness or outliers.
- Check for outliers/influential points (they can distort b and SE(b)).
These let you use the t-distribution for the slope test and the SE for b. For a quick CED-aligned refresher, see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG). For broader unit review and practice questions, visit the Unit 9 page (https://library.fiveable.me/ap-statistics/unit-9) and the AP practice bank (https://library.fiveable.me/practice/ap-statistics).
How do I check if the linear condition is met using residuals?
To check the linear condition for a t-test on the slope, look at the residuals (observed y − predicted y):
- Make a residual plot (residuals vs. x). For a linear relationship you want a random scatter around 0 with no clear pattern (no curve, no funnel, no clusters). A systematic pattern (U-shape, curve) means the true relationship isn’t linear.
- Also check constant variance (homoscedasticity) from that same plot: the vertical spread of residuals should be roughly the same for all x. A fan/funnel shape means variance changes with x.
- Check residual normality (required for inference about the slope): use a histogram or normal probability plot of residuals. Small samples (n < 30) need residuals free of strong skewness/outliers; larger samples are more forgiving.
- Independence isn’t checked with residuals—verify random sampling/experiment and the 10% rule.
These checks follow the CED VAR-7.L conditions. For a short study guide on Topic 9.4 see (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG). For extra practice, try problems at (https://library.fiveable.me/practice/ap-statistics).
Can someone explain step by step how to verify all the conditions for a slope test?
Step-by-step check for a t-test on the slope (β):
1. Independence—data come from a random sample or randomized experiment. If sampling without replacement, confirm n ≤ 10% of the population. If not met, you can’t trust the test.
2. Linearity—plot y vs. x and look at the residual plot (residuals vs. x). Residuals should scatter randomly around 0 (no curve). If you see a pattern, the true relationship isn’t linear.
3. Constant variance (homoscedasticity)—in the residual plot residual spread should be roughly the same for all x. Fan shape or increasing spread → violates assumption.
4. Normality of responses for a given x—check residual histogram or normal Q-Q plot. If distribution is roughly normal you’re good. If skewed, require n > 30; if n < 30, you need no strong skewness or outliers.
5. Outliers/influential points—check residuals and leverage (look for large standardized residuals or high leverage). One influential point can spoil the slope inference.
If all hold, use t-test for slope (H0: β = β0; Ha: β <, >, or ≠ β0) with df = n − 2. For walkthroughs and examples see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG) and more practice at the Unit 9 page (https://library.fiveable.me/ap-statistics/unit-9) or practice problems (https://library.fiveable.me/practice/ap-statistics).
What's the formula for the test statistic when testing a slope?
The test statistic for a t-test on the slope is t = (b − β0) / SE_b where b is the sample slope, β0 is the hypothesized slope in H0 (often 0), and SE_b is the standard error of b. Using the AP formula sheet, SE_b = s / (s_x · sqrt(n − 1)), where s = sqrt[Σ(y − ŷ)²/(n − 2)] and s_x is the sample SD of the x-values. The test follows a t-distribution with df = n − 2. Remember the hypotheses: H0: β = β0 and Ha: β < β0, β > β0, or β ≠ β0. Before you use the t-test, check linearity, equal spread (homoscedasticity), independence (random sample / 10% rule), and near-normal residuals (see Topic 9.4 CED). For more on setup and conditions, see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG). For practice, try problems at (https://library.fiveable.me/practice/ap-statistics).
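Written out, the formulas in this answer are:

$$t = \frac{b - \beta_0}{SE_b}, \qquad SE_b = \frac{s}{s_x\sqrt{n-1}}, \qquad s = \sqrt{\frac{\sum (y_i - \hat{y}_i)^2}{n-2}}, \qquad df = n - 2$$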
I don't understand what β₀ means in the null hypothesis H₀: β = β₀
β is the true (population) slope in the regression model. In the null H₀: β = β₀, β₀ is just whatever value you’re claiming about that true slope—usually 0 when you’re testing “no linear association.” So H₀: β = 0 says “there’s no linear relationship between x and y in the population.” On the AP exam you’ll most often do a t-test for the slope (VAR-7.J / VAR-7.K). The test statistic is t = (b − β₀) / SE_b with df = n − 2, and the alternatives can be β < β₀, β > β₀, or β ≠ β₀ depending on your question. Don’t forget to check the regression conditions (linearity, constant variance, independence, normality of residuals) before making inference (VAR-7.L). If you want a quick refresher tied to Topic 9.4, see the Fiveable study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG) and try practice problems at (https://library.fiveable.me/practice/ap-statistics).
How do I know if I should use a one-tailed or two-tailed test for slope?
Decide by the research question—your alternative hypothesis determines the tail.
- If you just ask “Is there a relationship?” or “Is the slope different from 0?” use a two-tailed test: Ha: β ≠ 0.
- If you have a directional claim (e.g., higher x leads to higher y), use a one-tailed test in that direction: Ha: β > 0 (or Ha: β < 0 if you expect a decrease).
Remember AP requirements: the test for slope is a t-test (CED VAR-7.J), null H0: β = β0 (usually 0), alternative Ha: β < β0, > β0, or ≠ β0 (VAR-7.K). Check conditions before inference (linearity, constant variance, independence, normal residuals; df = n − 2). One-tailed tests give more power in the specified direction but don’t let data-driven direction switching inflate Type I error. For a quick refresher on setting up hypotheses and conditions, see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG). For more practice problems, try Fiveable’s practice page (https://library.fiveable.me/practice/ap-statistics).
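In terms of a t-distribution with n − 2 degrees of freedom, the p-values for the two cases look like this:

$$p_{\text{two-sided}} = 2\,P\!\left(T_{n-2} \ge |t|\right), \qquad p_{\text{one-sided}} = P\!\left(T_{n-2} \ge t\right) \ \text{or} \ P\!\left(T_{n-2} \le t\right)$$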
What does it mean that the standard deviation doesn't vary with x and how do I check this?
That phrase means homoscedasticity: the variability (σy) of the y values is roughly the same for every x. For the t-test for a slope (VAR-7.L.1.b), you need roughly equal spread so your standard error for the slope is valid. How to check it:
- Make a residual plot (residuals vs. x). If the points show a roughly constant vertical scatter around 0 for all x, variance doesn’t change. If you see a “funnel” or cone shape (spread increasing or decreasing with x), the variance changes (heteroscedasticity).
- You can also split x into bins, compute the SD of residuals in each bin, and compare them—they should be similar.
- Optional: use a scale-location (spread) plot or statistical tests in software, but visual residual plots are what AP expects.
For practice and more examples on checking conditions, see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG) and the unit overview (https://library.fiveable.me/ap-statistics/unit-9).
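Here is a minimal sketch of the binning idea mentioned above (hypothetical data; NumPy assumed). The AP exam only expects the visual residual-plot check, so treat this as optional exploration.

```python
import numpy as np

# Hypothetical data for illustration; in practice substitute your own x and y arrays.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=60)
y = 3 + 2 * x + rng.normal(0, 1.5, size=60)

b, a = np.polyfit(x, y, deg=1)
residuals = y - (a + b * x)

# Split the x values into four bins and compare the SD of the residuals in each bin.
# Roughly similar SDs suggest the "constant variance" condition is reasonable.
edges = np.quantile(x, [0, 0.25, 0.5, 0.75, 1.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (x >= lo) & (x <= hi)
    print(f"x in [{lo:.1f}, {hi:.1f}]: SD of residuals = {np.std(residuals[in_bin], ddof=1):.2f}")
```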
If my residual plot looks curved, can I still do the t-test for slope?
Short answer: no—not reliably. A curved pattern in the residual plot means the linearity condition is violated, so the t-test for the slope (which assumes the true relationship is linear) isn’t appropriate (CED VAR-7.L.1a). What to do instead: don’t run the t-test yet. Try a transformation (log, square root), add a quadratic or higher-order term to the model, or fit a different model type and then re-check residuals. If you transform or change the model and residuals look random with roughly constant spread, then you can do inference on that valid model. Also verify homoscedasticity, independence (random sample / 10% rule), and normality of residuals—if n ≤ 30 watch for strong skewness/outliers (CED VAR-7.L.1b–d). For AP review, the CED expects you to use residual analysis to check linearity before doing the t-test (see the Topic 9.4 study guide on Fiveable: (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG) and the Unit 9 overview (https://library.fiveable.me/ap-statistics/unit-9)). For extra practice, try problems at (https://library.fiveable.me/practice/ap-statistics).
When do I need n > 30 for a slope test and when is it okay to have less?
You always use a t-test for the slope, but the sample-size rule of thumb (n > 30) is only about the normality condition for the residuals. If residuals are roughly normal (no strong skewness or outliers) you can do the t-test with n < 30. If the residuals are noticeably skewed, then the CED says you should have n > 30 so the t-distribution approximation is safer. So practically: check the conditions first—linearity, constant variance (homoscedasticity), independence (random sample or 10% rule), and that residuals are approximately normal. If residual plots and a histogram/QQ plot of residuals look okay, n can be under 30. If residuals are skewed or have outliers, aim for n > 30. For AP alignment, this is exactly VAR-7.L in the CED. Review the Topic 9.4 study guide for examples (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG) and try practice problems (https://library.fiveable.me/practice/ap-statistics) to get comfortable checking residuals.
How do I check the independence condition for regression data?
Independence is mostly about how the data were collected—it isn’t something you check with a residual plot. For the t-test for a slope you need either (a) a random sample or (b) a randomized experiment (CED VAR-7.L.1.c.i). If you sampled without replacement, also verify n ≤ 10% of the population (CED VAR-7.L.1.c.ii). Quick practical checks: was the data gathered by simple random sampling, random assignment, or a designed experiment? If yes, you can usually assume independence. If it’s a time series or repeated measures (observations close in time or clustered), independence may fail—look for autocorrelation or dependence in the study design. If you’re unsure, mention the limitation when you report results. For more on setting up the slope test and the full list of conditions, see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG). Practice problems are at (https://library.fiveable.me/practice/ap-statistics).
I missed class - what's the difference between testing if there's a relationship vs testing if the slope equals a specific value?
Short answer: they're the same procedure but different hypotheses. "Is there a relationship?" usually means test H0: β = 0 vs Ha: β ≠ 0 (or one-sided), i.e. does the population slope equal zero (no linear relationship). Testing "slope = a specific value" is the general form H0: β = β0 vs Ha: β ≠ β0 (or <, >)—β0 can be 0 or any number you want to test. Mechanics (AP CED): you always use the t-test for a slope: t = (b − β0) / SE_b with df = n − 2, compare to t-distribution. Check the four inference conditions (linearity, equal spread/homoscedasticity, independence/random sampling or 10% rule, normality of residuals) before trusting p-values. On the exam you’ll present hypotheses in terms of β, report t, df, p-value, and condition checks (see the Topic 9.4 study guide (https://library.fiveable.me/ap-statistics/unit-9/setting-up-test-for-slope-regression-model/study-guide/KEnR8FNAnXWsr8dFSAKG) and Unit 9 overview (https://library.fiveable.me/ap-statistics/unit-9)). For extra practice, try problems at (https://library.fiveable.me/practice/ap-statistics).
