Rolling dice is one of the most straightforward ways to see probability distributions in action. When you roll three standard dice, you get a discrete random variable (the sum) that can take on a limited set of values, each with a calculable probability. This experiment ties together the core ideas of Unit 4: sample spaces, probability distributions, expected value, and variance.
Probability Distributions for Rolling Three Dice
Probabilities for multiple dice rolls
Each die has six faces numbered 1 through 6. Because the dice are independent of each other, the total number of outcomes when rolling three dice is 6 × 6 × 6 = 6³ = 216.
Every individual outcome (like rolling a 2, 5, and 3) is equally likely, with a probability of 1/216.
To find the probability of a specific sum, you count how many of those 216 outcomes produce that sum, then divide by 216.
- Sum of 18 (6+6+6): Only 1 way to get this, so P(X = 18) = 1/216.
- Sum of 4 (1+1+2, 1+2+1, 2+1+1): There are 3 ways, so P(X = 4) = 3/216 = 1/72.
Notice that order matters here. Rolling 1-2-1 and 2-1-1 count as different outcomes even though the same numbers appear. This is why counting carefully (using combinatorics) is important for finding the number of ways to reach each sum.
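These counts are easy to verify by brute force. A short Python sketch using only the standard library enumerates all 216 ordered outcomes and counts the ones that hit a given sum:

```python
from itertools import product

# Enumerate all 6^3 = 216 ordered outcomes of rolling three dice
outcomes = list(product(range(1, 7), repeat=3))
print(len(outcomes))  # 216

# Order matters: (1, 2, 1) and (2, 1, 1) are counted as different outcomes
ways_4 = sum(1 for roll in outcomes if sum(roll) == 4)
ways_18 = sum(1 for roll in outcomes if sum(roll) == 18)
print(ways_4, ways_18)  # 3 1
```

Because `product` generates ordered triples, the combinatorial bookkeeping (1-2-1 versus 2-1-1) is handled automatically.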

Probability distributions of dice sums
Define a random variable X as the sum of the three dice. The possible values of X range from 3 (1+1+1) to 18 (6+6+6).
To build the full probability distribution:
- List every possible value of X from 3 to 18.
- For each value, count the number of outcomes that produce that sum.
- Divide each count by 216 to get the probability.
For example, the sum of 7 can be achieved 15 different ways (1+1+5, 1+2+4, 1+3+3, 1+4+2, 1+5+1, 2+1+4, 2+2+3, 2+3+2, 2+4+1, 3+1+3, 3+2+2, 3+3+1, 4+1+2, 4+2+1, 5+1+1), so P(X = 7) = 15/216 ≈ 0.069.
Two quick checks to confirm your distribution is valid:
- Every probability is between 0 and 1.
- All the probabilities sum to exactly 1.
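The three steps above, along with both validity checks, can be carried out in a few lines of Python. This sketch uses exact fractions so the probabilities sum to exactly 1 with no rounding error:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# Count how many of the 216 ordered outcomes produce each sum
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

# Divide each count by 216 to get the probability distribution
dist = {s: Fraction(c, 216) for s, c in sorted(counts.items())}

print(counts[7])  # 15 ways to roll a sum of 7

# Validity checks: every probability in [0, 1], total exactly 1
assert all(0 <= p <= 1 for p in dist.values())
assert sum(dist.values()) == 1
```

Printing `dist` shows the counts rising from 1/216 at the extremes (3 and 18) to a peak in the middle, which is exactly the symmetric shape described below.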
When you visualize this distribution as a histogram (possible sums on the x-axis, probabilities on the y-axis), you'll see a symmetric, bell-shaped pattern centered at 10.5. Sums near the middle (like 10 and 11) have the highest probabilities, while extreme sums (3 and 18) are very rare. The distribution is symmetric because sums at opposite ends have equal probabilities: P(X = 3) = P(X = 18), P(X = 4) = P(X = 17), and so on.
Discrete random variables in dice experiments
A discrete random variable takes on a countable number of distinct values. The sum of three dice qualifies because it can only be an integer from 3 to 18.
Expected Value (Mean)
The expected value E[X] tells you the long-run average sum if you rolled three dice many times. Calculate it with:

E[X] = Σ x · P(X = x)

You multiply each possible sum by its probability, then add all those products together:

E[X] = 3 · (1/216) + 4 · (3/216) + ⋯ + 18 · (1/216) = 10.5
This makes intuitive sense: the expected value of a single die is 3.5, and 3 × 3.5 = 10.5.
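A quick sketch confirming the expected value, again using exact fractions to avoid floating-point rounding:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# Number of outcomes producing each sum
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

# E[X] = sum over all sums s of s * P(X = s)
ev = sum(Fraction(s * c, 216) for s, c in counts.items())
print(float(ev))  # 10.5
```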
Variance and Standard Deviation
These measure how spread out the sums are around the expected value.
- Calculate E[X²] = Σ x² · P(X = x) by multiplying each value of X squared by its probability and summing; for three dice, E[X²] = 119.
- Find the variance: Var(X) = E[X²] − (E[X])² = 119 − 10.5² = 119 − 110.25 = 8.75.
- Take the square root for the standard deviation: σ = √8.75 ≈ 2.96.
For three dice, σ² = 8.75 and σ ≈ 2.96. A larger standard deviation would mean the sums are more spread out from 10.5; a smaller one would mean they cluster more tightly.
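The same enumeration verifies the variance and standard deviation step by step:

```python
from itertools import product
from collections import Counter
from fractions import Fraction
import math

counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

mean = sum(Fraction(s * c, 216) for s, c in counts.items())        # E[X] = 10.5
ex2 = sum(Fraction(s * s * c, 216) for s, c in counts.items())     # E[X^2] = 119
var = ex2 - mean**2                                                # 119 - 110.25 = 8.75
sd = math.sqrt(var)                                                # sqrt(8.75)
print(float(var), round(sd, 2))  # 8.75 2.96
```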
Theoretical foundations and applications
The law of large numbers connects directly to this experiment: if you roll three dice thousands of times and track the average sum, that average will get closer and closer to 10.5 as you increase the number of rolls. Any single trial might give you a 5 or a 16, but the long-run average is predictable.
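You can watch the law of large numbers at work with a short simulation; the seed here is an arbitrary choice so the run is reproducible:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def roll_three():
    """Simulate one trial: the sum of three fair dice."""
    return random.randint(1, 6) + random.randint(1, 6) + random.randint(1, 6)

# As the number of trials grows, the sample average drifts toward 10.5
for n in (100, 10_000, 1_000_000):
    avg = sum(roll_three() for _ in range(n)) / n
    print(f"{n:>9} rolls: average = {avg:.3f}")
```

With 100 rolls the average can still wander noticeably; by a million rolls it sits very close to the theoretical mean.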
This dice experiment also gives you a preview of the central limit theorem. Even though a single die has a flat (uniform) distribution, the sum of three dice already looks approximately normal. As you add more dice, the distribution of the sum becomes even more bell-shaped. You'll explore this idea more formally later in the course.
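One way to see this numerically is to compute the exact distribution of the sum for any number of dice by repeatedly convolving the single-die distribution. The function name `dice_sum_dist` below is an illustrative choice, not a standard API:

```python
def dice_sum_dist(n):
    """Exact probability distribution of the sum of n fair dice,
    built by convolving the single-die distribution n times."""
    dist = {0: 1.0}  # sum of zero dice is 0 with probability 1
    for _ in range(n):
        new = {}
        for s, p in dist.items():
            for face in range(1, 7):
                # Each face adds p/6 of the current probability mass
                new[s + face] = new.get(s + face, 0.0) + p / 6
        dist = new
    return dist

# The peak probability shrinks and mass concentrates near the mean as n grows
for n in (2, 3, 10):
    d = dice_sum_dist(n)
    peak = max(d, key=d.get)
    print(n, peak, round(d[peak], 4))
```

For n = 3 this reproduces the distribution built earlier (for example, the sum 10 has probability 27/216 = 0.125), and larger n gives an ever more bell-shaped histogram.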
These principles extend well beyond dice. Any time you're working with repeated independent trials and looking at sums or averages, the same logic applies.