The Regression Equation
A regression equation finds the best-fitting straight line through a set of data points, letting you describe and predict how one variable responds to changes in another. In this section, you'll learn how to calculate that line, interpret what its parts mean, and evaluate how well it actually fits your data.

Least-Squares Regression Line Calculation
The goal of least-squares regression is to find the line that minimizes the sum of squared residuals. A residual is the vertical distance between an actual data point and the predicted value on the line: $e_i = y_i - \hat{y}_i$. By squaring these distances and minimizing their total, we ensure the line sits as close to all the points as possible, and we prevent positive and negative errors from canceling each other out.
The regression equation takes the form:
$\hat{y} = b_0 + b_1 x$
- $\hat{y}$ is the predicted value of y for a given x
- $b_1$ is the slope (the change in y for each one-unit increase in x)
- $b_0$ is the y-intercept (the predicted value of y when x = 0)
How to calculate the slope and intercept:
- Find the means $\bar{x}$ and $\bar{y}$ of your x and y data.
- Calculate the slope using:
$b_1 = \dfrac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2}$
The numerator captures how x and y move together (their co-variation), and the denominator captures how spread out the x-values are.
- Calculate the y-intercept by plugging the slope and the means into:
$b_0 = \bar{y} - b_1 \bar{x}$
This formula guarantees that the regression line always passes through the point $(\bar{x}, \bar{y})$, which is a useful fact to remember.
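The steps above can be sketched in a few lines of Python. The data here are hypothetical (years of experience vs. salary in dollars), chosen only to illustrate the arithmetic:

```python
# Hypothetical data: years of experience (x) vs. salary in dollars (y)
x = [1, 2, 3, 4, 5]
y = [30000, 33000, 35000, 38000, 40000]

n = len(x)
x_bar = sum(x) / n          # mean of x
y_bar = sum(y) / n          # mean of y

# Numerator: how x and y co-vary; denominator: spread of the x-values
numerator = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
denominator = sum((xi - x_bar) ** 2 for xi in x)

b1 = numerator / denominator     # slope
b0 = y_bar - b1 * x_bar          # y-intercept

print(f"y-hat = {b0} + {b1}x")   # → y-hat = 27700.0 + 2500.0x
```

Note that the line passes through $(\bar{x}, \bar{y}) = (3, 35200)$: plugging $x = 3$ into the fitted equation returns exactly 35,200.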

Interpretation of Slope and Y-Intercept
Knowing the numbers isn't enough; you need to explain what they mean in context.
Slope ($b_1$): For each one-unit increase in x, the predicted value of y changes by $b_1$ units.
- A positive slope means y tends to increase as x increases (direct relationship).
- A negative slope means y tends to decrease as x increases (inverse relationship).
- Always include units. For example, if x is years of experience and y is salary in dollars, a slope of 2,400 means: "For each additional year of experience, predicted salary increases by $2,400."
Y-intercept ($b_0$): This is the predicted value of y when x = 0.
- Sometimes this makes sense: if x is hours studied and y is exam score, $b_0$ is the predicted score with zero hours of studying.
- Often it doesn't make sense. If x is height in inches and y is weight, then $b_0$ would predict the weight of a person with zero height. In cases like this, the y-intercept is just a mathematical anchor for the line, not a meaningful prediction. Recognizing this distinction is important on exams.

Strength of Linear Relationships
Correlation coefficient (r) measures both the strength and direction of a linear relationship between two variables.
- $r$ ranges from $-1$ to $+1$
- $r = +1$: perfect positive linear relationship (all points fall exactly on an increasing line)
- $r = -1$: perfect negative linear relationship (all points fall exactly on a decreasing line)
- $r = 0$: no linear relationship (points show no linear pattern)
- The closer $|r|$ is to 1, the more tightly the points cluster around the line.
The formula for r is:
$r = \dfrac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum (x_i - \bar{x})^2 \sum (y_i - \bar{y})^2}}$
Notice the numerator is the same as in the slope formula. The denominator standardizes it so that r is always between $-1$ and $+1$.
Coefficient of determination ($r^2$) tells you the proportion of the variation in y that is explained by the linear relationship with x.
- $r^2$ ranges from 0 to 1 and is typically reported as a percentage.
- If $r = 0.8$, then $r^2 = 0.64$, meaning 64% of the variation in y is explained by x. The remaining 36% is due to other factors or randomness.
- An $r^2$ close to 1 means the model captures most of the variability; close to 0 means it captures very little.
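Both $r$ and $r^2$ follow directly from the same deviation sums used for the slope. A minimal sketch with made-up paired data:

```python
import math

# Hypothetical paired data
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))  # co-variation
sxx = sum((xi - x_bar) ** 2 for xi in x)                        # spread of x
syy = sum((yi - y_bar) ** 2 for yi in y)                        # spread of y

r = sxy / math.sqrt(sxx * syy)   # correlation coefficient
r_squared = r ** 2               # coefficient of determination

print(round(r, 4), round(r_squared, 2))   # → 0.7746 0.6
```

Here about 60% of the variation in y is explained by the linear relationship with x; the remaining 40% is other factors or randomness.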
A scatter plot is always your first step for visualizing the relationship. It helps you confirm that the relationship is actually linear before you trust r or the regression equation.

Assessing Model Fit and Reliability
Calculating a regression line doesn't guarantee it's a good model. You need to check whether the line is trustworthy.
Residual analysis is the primary diagnostic tool:
- Plot the residuals ($y_i - \hat{y}_i$) against the predicted values or against x.
- Look for randomness. A good model produces residuals that scatter randomly around zero with no visible pattern.
- Watch for non-linear patterns (curves in the residual plot), which suggest a straight line isn't the right model.
- Check for homoscedasticity, meaning the spread of residuals stays roughly constant across all x-values. If the residuals fan out (get wider) as x increases, the model's predictions are less reliable for larger x-values.
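Computing residuals is simple: observed minus predicted. A sketch using hypothetical data and a hypothetical fitted line (slope 2500, intercept 27700); a useful sanity check is that least-squares residuals always sum to (numerically) zero:

```python
# Hypothetical data and least-squares coefficients for illustration
x = [1, 2, 3, 4, 5]
y = [30000, 33000, 35000, 38000, 40000]
b0, b1 = 27700.0, 2500.0

y_hat = [b0 + b1 * xi for xi in x]                    # predicted values
residuals = [yi - yhi for yi, yhi in zip(y, y_hat)]   # observed - predicted

print(residuals)              # → [-200.0, 300.0, -200.0, 300.0, -200.0]
print(sum(residuals))         # least-squares residuals sum to zero → 0.0
```

Plotting these residuals against x (e.g., with matplotlib) and scanning for curves or fanning is the visual step the list above describes.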
Outliers and influential points deserve special attention. A single unusual point, especially one with an extreme x-value, can pull the entire regression line toward it. Always check whether removing a suspicious point substantially changes the slope or intercept.
Standard error of the estimate ($s_e$) measures the average size of the residuals: $s_e = \sqrt{\sum (y_i - \hat{y}_i)^2 / (n - 2)}$. Think of it as the typical amount by which actual y-values deviate from the predicted values. A smaller $s_e$ means predictions are more precise.
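A quick sketch of the computation, using hypothetical residuals from a fit with n = 5 points; the divisor is n − 2 because two quantities (slope and intercept) were estimated from the data:

```python
import math

# Hypothetical residuals from a fitted regression line (n = 5 data points)
residuals = [-200.0, 300.0, -200.0, 300.0, -200.0]
n = len(residuals)

sse = sum(e ** 2 for e in residuals)   # sum of squared residuals
s_e = math.sqrt(sse / (n - 2))         # divide by n - 2, not n

print(round(s_e, 2))   # → 316.23
```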
Confidence intervals for the slope and intercept give you a range of plausible values rather than a single estimate. A confidence interval for $b_1$ that does not contain zero provides evidence that there is a real linear relationship between x and y.
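A rough sketch of a 95% confidence interval for the slope, under the usual regression assumptions. The data, the fitted coefficients, and the critical value $t^* = 3.182$ (df = n − 2 = 3, read from a t-table) are all illustrative:

```python
import math

# Hypothetical data and their least-squares fit
x = [1, 2, 3, 4, 5]
y = [30000, 33000, 35000, 38000, 40000]
b0, b1 = 27700.0, 2500.0

n = len(x)
x_bar = sum(x) / n
sxx = sum((xi - x_bar) ** 2 for xi in x)

residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
s_e = math.sqrt(sum(e ** 2 for e in residuals) / (n - 2))

se_b1 = s_e / math.sqrt(sxx)   # standard error of the slope
t_crit = 3.182                 # t*, 95% confidence, df = n - 2 = 3 (t-table)

lower = b1 - t_crit * se_b1
upper = b1 + t_crit * se_b1
print(round(lower, 1), round(upper, 1))   # → 2181.8 2818.2
```

Because this (made-up) interval lies entirely above zero, it would count as evidence of a real linear relationship between x and y.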