Normal Distribution Calculations and Interpretation
Normal distributions let you calculate the probability of observing values within specific ranges. By converting raw data to z-scores, you can use a single standardized table (or calculator function) to find probabilities for any normal distribution, regardless of its original mean and standard deviation.

Probability Calculations in Normal Distributions
Z-scores measure how many standard deviations a value sits from the mean. They're the bridge between your raw data and the standard normal table.
z = (x - μ) / σ
where:
- x = the observed value
- μ = the population mean
- σ = the population standard deviation
A z-score of 1.5 means the value is 1.5 standard deviations above the mean. A z-score of -2 means it's 2 standard deviations below.
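The z-score formula is simple enough to sketch directly. Here's a minimal helper (the function name `z_score` is just illustrative):

```python
def z_score(x, mu, sigma):
    """Number of standard deviations x sits above (+) or below (-) the mean."""
    return (x - mu) / sigma

# A value of 86 in a distribution with mean 74 and sd 8 is 1.5 sd above the mean:
print(z_score(86, 74, 8))   # 1.5
print(z_score(58, 74, 8))   # -2.0
```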
Standardizing a normal distribution means converting it from N(μ, σ) to N(0, 1), the standard normal distribution. Once standardized, every problem uses the same table or calculator function.
Finding probabilities with the standard normal table:
The table gives you the cumulative probability to the left of a z-score. That's the proportion of data at or below that value.
- "Less than" problems: Look up the z-score directly. The table value is your answer.
- "Greater than" problems: Subtract the table value from 1.
- "Between two values" problems: Find the cumulative probability for each boundary, then subtract.
Example: Suppose exam scores are normally distributed with μ = 74 and σ = 8. What's the probability a student scores between 70 and 90?
- Find the z-score for 70: z = (70 - 74) / 8 = -0.5
- Find the z-score for 90: z = (90 - 74) / 8 = 2.0
- Look up both in the table: P(Z < -0.5) = 0.3085, P(Z < 2.0) = 0.9772
- Subtract: 0.9772 - 0.3085 = 0.6687
About 66.9% of students score between 70 and 90.
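The worked example can be checked with Python's standard library, which provides `statistics.NormalDist` with a `cdf` method (the distribution parameters below mirror the example):

```python
from statistics import NormalDist

exam = NormalDist(mu=74, sigma=8)

# P(70 < X < 90): cumulative probability at the upper bound
# minus cumulative probability at the lower bound.
p = exam.cdf(90) - exam.cdf(70)
print(round(p, 4))   # 0.6687
```

No z-table lookup is needed: `cdf` standardizes internally, which is exactly the z-score conversion described above.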

Interpretation of Normal Distribution Graphs
Normal distributions have a few defining properties worth memorizing:
- The curve is symmetric and bell-shaped, centered on the mean.
- The mean, median, and mode are all equal, sitting at the center.
- The Empirical Rule (68-95-99.7 Rule) describes how data clusters:
- ~68% of data falls within 1σ of the mean
- ~95% falls within 2σ
- ~99.7% falls within 3σ
- The total area under the curve equals 1 (representing 100% probability).
The Empirical Rule is useful for quick estimates. If someone asks "is a value of 98 unusual?" and the distribution has μ = 74 and σ = 8, you can note that 98 is 3 standard deviations above the mean, placing it beyond the central 99.7% of the distribution, in the outer 0.3% (and in the upper tail alone, only about 0.15% of values fall that high). That's quite rare.
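The Empirical Rule percentages can be verified numerically. A quick sketch using the standard library's `NormalDist` (with the same illustrative μ = 74, σ = 8 as above):

```python
from statistics import NormalDist

d = NormalDist(mu=74, sigma=8)

within_1sd = d.cdf(82) - d.cdf(66)   # area within mu +/- 1 sd, ~0.683
within_2sd = d.cdf(90) - d.cdf(58)   # area within mu +/- 2 sd, ~0.954
upper_tail = 1 - d.cdf(98)           # area above mu + 3 sd, ~0.00135

print(round(within_1sd, 3), round(within_2sd, 3), round(upper_tail, 5))
```

The upper-tail value confirms the "is 98 unusual?" estimate: only about 0.135% of values land that far above the mean.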
When reading a normal curve graph, the shaded area between any two points represents the probability of a randomly selected value falling in that range. A shaded region covering more area under the curve means a higher probability; a narrow region out in the tails, where the curve is low, means a lower one.

Technology for Normal Distribution Analysis
Graphing calculators and statistical software handle these calculations without needing a z-table. The two main functions you'll use:
Normal CDF (cumulative distribution function):
- Enter the lower bound and upper bound of your range.
- Enter the mean (μ) and standard deviation (σ).
- The output is the probability of a value falling within that range.
For "less than" problems, use -∞ (or a very large negative number) as the lower bound. For "greater than" problems, use +∞ (or a very large positive number) as the upper bound.
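A calculator-style normal CDF can be emulated in a few lines. This sketch (the helper name `normal_cdf` is just illustrative) uses `statistics.NormalDist` and handles all three problem types with the bound conventions described above:

```python
from statistics import NormalDist

def normal_cdf(lower, upper, mu, sigma):
    """Probability that a Normal(mu, sigma) value falls between lower and upper."""
    d = NormalDist(mu, sigma)
    return d.cdf(upper) - d.cdf(lower)

# Using the illustrative exam distribution mu=74, sigma=8:
p_between = normal_cdf(70, 90, 74, 8)      # "between":      ~0.6687
p_less    = normal_cdf(-1e99, 70, 74, 8)   # "less than":    ~0.3085
p_greater = normal_cdf(90, 1e99, 74, 8)    # "greater than": ~0.0228
```

Passing `-1e99` or `1e99` mimics the "very large number" trick used on graphing calculators; the CDF saturates to 0 or 1 at those extremes.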
Inverse Normal (quantile function):
This works in reverse: you provide a probability and get back the corresponding value.
- Enter the cumulative probability (area to the left).
- Enter the mean and standard deviation.
- The output is the value in the original distribution that corresponds to that percentile.
Example: What score marks the 90th percentile if μ = 65 and σ = 15? Use inverse normal with area = 0.90, μ = 65, σ = 15. The result is approximately 84.2.
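The same inverse-normal lookup is available in the standard library as `NormalDist.inv_cdf`, which maps a cumulative probability back to a value in the original distribution:

```python
from statistics import NormalDist

# 90th percentile of a Normal(65, 15) distribution
score = NormalDist(mu=65, sigma=15).inv_cdf(0.90)
print(round(score, 1))   # 84.2
```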

Theoretical Foundations and Applications
The Central Limit Theorem explains why normal distributions appear so often: when you average many independent random variables, the result tends toward a normal distribution regardless of the original shape. This is why heights, test scores, measurement errors, and many biological traits approximate a bell curve.
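The Central Limit Theorem is easy to see in a small simulation. The sketch below averages uniform draws (which are nothing like bell-shaped individually) and checks that the sample means behave like a normal distribution; the sample sizes and seed are arbitrary choices for illustration:

```python
import random
from statistics import mean

random.seed(42)

# Each sample mean averages 40 independent uniform(0, 1) draws.
sample_means = [mean(random.random() for _ in range(40)) for _ in range(5000)]

# CLT prediction: roughly Normal(0.5, sqrt(1/12) / sqrt(40) ~ 0.0456).
# Empirical Rule check: ~68% of sample means should fall within one sd of 0.5.
frac_within_1sd = sum(abs(m - 0.5) < 0.0456 for m in sample_means) / 5000
print(round(frac_within_1sd, 2))
```

Even though a single uniform draw is flat, the averages cluster in a bell shape around 0.5, with about 68% inside one predicted standard deviation, just as the theorem says.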
Normal distributions are foundational for statistical inference. Confidence intervals and hypothesis tests rely heavily on normal (and related) distributions. The skills you're building here with z-scores and probability calculations carry directly into those later topics.