Continuous Probability Functions
Continuous probability functions model situations where a variable can take on infinitely many values within a range. Unlike discrete random variables, whose outcomes you can count, continuous random variables like height, time, or temperature require a different approach: instead of assigning probability to individual outcomes, you calculate probability as the area under a curve.

Integration for continuous probabilities
A probability density function (PDF), denoted f(x), describes how probability is distributed across the values of a continuous random variable X. Variables like time, weight, and temperature are common examples.
The core idea: to find the probability that X falls within some range [a, b], you integrate the PDF over that range: P(a ≤ X ≤ b) = ∫_a^b f(x) dx.
For example, if f(x) models the weight of a product in grams, then P(100 ≤ X ≤ 150) = ∫_100^150 f(x) dx gives the probability that a randomly selected product weighs between 100 and 150 grams.
Every valid PDF must satisfy two properties:
- Non-negativity: f(x) ≥ 0 for all x. A density function can never be negative.
- Total area equals 1: ∫_{-∞}^{∞} f(x) dx = 1. This guarantees that the probabilities across all possible outcomes sum to certainty.
One more connection worth knowing: integrating the PDF from -∞ up to some value x gives the cumulative distribution function (CDF), F(x) = P(X ≤ x). The CDF tells you the probability that the variable is at most x.
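To make the PDF/CDF relationship concrete, here is a minimal sketch that approximates these integrals numerically with a midpoint Riemann sum. The exponential density and the rate value lam = 0.5 are assumed for illustration; for the exponential, both integrals also have closed forms to check against.

```python
import math

# Assumed example: exponential PDF with rate lam = 0.5 (an illustrative value)
lam = 0.5

def pdf(x):
    """Exponential density f(x) = lam * e^(-lam * x) for x >= 0, else 0."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def integrate(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of the integral of f from a to b."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# P(1 <= X <= 2): area under the PDF between 1 and 2
p = integrate(pdf, 1, 2)

# CDF at 2: F(2) = P(X <= 2), integrating from the lower bound (0 here) up to 2
F2 = integrate(pdf, 0, 2)

# Closed forms for the exponential, used as a sanity check:
# P(1 <= X <= 2) = e^(-lam) - e^(-2*lam),  F(2) = 1 - e^(-2*lam)
exact_p = math.exp(-lam) - math.exp(-2 * lam)
exact_F = 1 - math.exp(-2 * lam)
```

The same `integrate` helper works for any density: only the bounds and the PDF change.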

Probability-area relationship in distributions
The link between probability and area is the single most important idea in this section. The area under the PDF curve between two x-values is the probability that the variable falls in that range. For instance, the area under a weight distribution curve between 60 and 70 kg equals the probability that a randomly chosen individual weighs between 60 and 70 kg.
Because the total area under the curve is 1, the probability of X taking any single exact value is 0. This might feel strange, but think of it this way: there are infinitely many possible values in any interval, so the area above a single point (which has no width) is zero. A practical consequence is that P(X = a) = 0, and P(a ≤ X ≤ b) = P(a < X < b). Including or excluding endpoints doesn't change the probability.
The shape of the PDF curve tells you where values are more or less concentrated:
- Higher values of f(x) mean the variable is more likely to fall near that region. The peak of a normal distribution, for example, sits at the mean.
- Steeper slopes indicate probability density is changing rapidly, which you typically see in the tails of a distribution where values become increasingly rare.
Note that f(x) itself is not a probability. It's a density, and it can actually exceed 1. Only the area under the curve over an interval gives you a probability.
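A quick sketch of that last point, using an assumed uniform density on [0, 0.25]: the density value is 4, well above 1, yet every probability computed from it is still a valid area between 0 and 1.

```python
# Assumed example: uniform density on [0, 0.25]
a, b = 0.0, 0.25
height = 1 / (b - a)      # f(x) = 4 on [a, b] -- a density larger than 1

# P(0.1 <= X <= 0.2) is the area of a rectangle: width * height = 0.1 * 4
p = (0.2 - 0.1) * height  # a valid probability (0.4) even though f(x) = 4

# P(X = 0.1) is the area above a single point, which has zero width
p_point = 0.0 * height    # exactly 0
```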
Applications of continuous probability functions
Solving a continuous probability problem generally follows these steps:
1. Identify the distribution. Match the real-world scenario to a known distribution:
   - Normal: symmetric, bell-shaped data like heights or exam scores
   - Exponential: time between independent events, like wait times at a bus stop or the lifespan of a light bulb
   - Continuous uniform: all values equally likely between a minimum a and maximum b, like a random number generator on an interval
2. Determine the parameters. Use the information given in the problem:
   - Normal: mean μ and standard deviation σ
   - Exponential: rate parameter λ (the reciprocal of the mean)
   - Uniform: endpoints a and b
3. Set up the integral. Translate the question into bounds of integration. For example, "probability a light bulb lasts between 1000 and 1500 hours" becomes ∫_{1000}^{1500} f(t) dt using the appropriate PDF. For cumulative questions like "probability a student scores below the 60th percentile," you'd integrate from the lower bound of the distribution up to that score.
4. Calculate and interpret. Compute the integral (by hand, table, or technology) and state what the result means in context. A result of 0.12 might mean there's a 12% chance a product is defective, or a 12% chance a customer waits more than 5 minutes.
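The steps above can be sketched end to end for the light-bulb question. The mean lifetime of 1200 hours is an assumed value chosen for illustration; the exponential integral is evaluated with its known closed form.

```python
import math

# Step 1: identify -- lifetimes until failure suggest an exponential model.
# Step 2: parameters -- the rate lam is the reciprocal of the mean
#         (mean_life = 1200 hours is an assumed value).
mean_life = 1200.0
lam = 1 / mean_life

# Step 3: set up the integral of f(t) = lam * e^(-lam * t) from 1000 to 1500.
# For the exponential PDF this integral has a closed form:
#   integral from a to b of lam * e^(-lam * t) dt = e^(-lam * a) - e^(-lam * b)
p = math.exp(-lam * 1000) - math.exp(-lam * 1500)

# Step 4: interpret -- p is the chance a bulb lasts between 1000 and 1500 hours.
print(f"P(1000 <= T <= 1500) = {p:.4f}")
```

Swapping in a different distribution only changes steps 1-3; the interpretation step is always stated in the units of the original problem.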
Measures of central tendency and dispersion
Just as discrete distributions have expected values and variances, continuous distributions do too. The formulas use integration instead of summation.
Expected value (mean): μ = E[X] = ∫_{-∞}^{∞} x f(x) dx
This is the long-run average value of X if you could repeat the random process many times. It acts as the "balance point" of the distribution.
Variance: σ² = Var(X) = ∫_{-∞}^{∞} (x − μ)² f(x) dx
Variance measures how spread out the distribution is around the mean. A larger variance means values are more dispersed.
Standard deviation is the square root of the variance, σ = √Var(X), and it's in the same units as X, which makes it easier to interpret than variance.
A useful computational shortcut for variance: Var(X) = E[X²] − μ², where E[X²] = ∫_{-∞}^{∞} x² f(x) dx. This form is often faster than expanding (x − μ)² inside the integral.
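Both routes to the variance can be checked numerically. This sketch uses an assumed uniform density on [0, 1] (where μ = 1/2, E[X²] = 1/3, and Var(X) = 1/12) and a midpoint Riemann sum for the integrals.

```python
# Assumed example: uniform density f(x) = 1 on [0, 1]
def integrate(f, a, b, n=100_000):
    """Midpoint Riemann sum approximation of the integral of f from a to b."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def pdf(x):
    return 1.0  # uniform density on [0, 1]

mu = integrate(lambda x: x * pdf(x), 0, 1)                  # E[X]   = 1/2
ex2 = integrate(lambda x: x**2 * pdf(x), 0, 1)              # E[X^2] = 1/3
var_def = integrate(lambda x: (x - mu)**2 * pdf(x), 0, 1)   # definition
var_short = ex2 - mu**2                                     # shortcut

# Both routes agree: Var(X) = 1/12 for the uniform on [0, 1]
```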