Binomial Distribution
The binomial distribution lets you calculate the probability of getting a specific number of successes across a fixed set of trials, where each trial has only two outcomes. It shows up constantly in statistics, from quality control (how many defective items in a batch?) to medicine (how many patients respond to a treatment?). Understanding it well also sets you up for later topics like confidence intervals and hypothesis testing.
Characteristics of Binomial Experiments
For an experiment to count as binomial, it must meet all four of these conditions:
- Fixed number of trials ($n$). You decide in advance how many trials you'll run. For example, flipping a coin 10 times means $n = 10$.
- Two outcomes per trial. Each trial results in either a "success" (with probability $p$) or a "failure" (with probability $q = 1 - p$). These labels are just conventions; "success" doesn't have to be a good thing. If you're counting defective products, a defect is your "success."
- Constant probability. The probability of success $p$ stays the same from trial to trial. Every coin flip has the same chance of heads.
- Independent trials. The outcome of one trial doesn't affect any other. Flipping heads on trial 3 has no influence on trial 4. If you're sampling from a population, this means sampling with replacement, or from a population large enough that removing one item barely changes the probabilities.
A binomial distribution is a type of discrete probability distribution because the number of successes can only be a whole number (0, 1, 2, ... up to $n$).

Bernoulli Trials vs. Binomial Experiments
A Bernoulli trial is a single trial with exactly two outcomes: success (probability $p$) or failure (probability $q = 1 - p$). One coin flip is a Bernoulli trial. One roll of a die where you check "did I get a 6?" is a Bernoulli trial.
A binomial experiment is just a fixed number ($n$) of independent Bernoulli trials, all with the same success probability. So flipping a coin 5 times is a binomial experiment made up of 5 Bernoulli trials. The total number of successes across those trials is a discrete random variable that follows a binomial distribution.
Think of it this way: a Bernoulli trial is the building block, and a binomial experiment stacks $n$ of those blocks together.
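This building-block relationship can be made concrete with a short simulation. A minimal sketch in Python (the function names `bernoulli_trial` and `binomial_experiment` are my own, chosen to mirror the terms above):

```python
import random

def bernoulli_trial(p, rng):
    """One Bernoulli trial: returns 1 (success) with probability p, else 0."""
    return 1 if rng.random() < p else 0

def binomial_experiment(n, p, rng):
    """A binomial experiment: n independent Bernoulli trials, all with the
    same success probability p; returns the total number of successes."""
    return sum(bernoulli_trial(p, rng) for _ in range(n))

rng = random.Random(0)
# Flip a fair coin 5 times and count heads: a whole number between 0 and 5.
heads = binomial_experiment(5, 0.5, rng)
print(heads)
```

Note that the binomial function does nothing but repeat the Bernoulli function and add up the results, which is exactly the "stacked blocks" picture.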

The Binomial Probability Formula
To find the probability of getting exactly $k$ successes in $n$ trials:

$$P(X = k) = \binom{n}{k} p^k (1 - p)^{n-k}$$
- $\binom{n}{k}$ is the binomial coefficient, which counts the number of ways to arrange $k$ successes among $n$ trials. It's calculated as $\binom{n}{k} = \frac{n!}{k!(n-k)!}$
- $p^k$ accounts for the probability of the $k$ successes
- $(1 - p)^{n-k}$ accounts for the probability of the $n - k$ remaining failures
Example: What's the probability of getting exactly 3 heads in 5 coin flips?
Here $n = 5$, $k = 3$, and $p = 0.5$:

$$P(X = 3) = \binom{5}{3}(0.5)^3(0.5)^2 = 10 \times 0.125 \times 0.25 = 0.3125$$
So there's about a 31.25% chance of getting exactly 3 heads in 5 flips.
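The formula translates directly into code. A minimal sketch using Python's standard-library `math.comb` for the binomial coefficient (the function name `binomial_pmf` is my own):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k): probability of exactly k successes in n trials,
    computed as comb(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Exactly 3 heads in 5 fair coin flips:
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```

As a sanity check, summing `binomial_pmf(k, n, p)` over all $k$ from 0 to $n$ gives 1, since some number of successes must occur.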
Mean and Variance of Binomial Distributions
These formulas are much simpler than the general formulas for discrete random variables because the binomial structure does the heavy lifting.
- Mean: $\mu = np$
- Variance: $\sigma^2 = npq$
- Standard deviation: $\sigma = \sqrt{npq}$

where $q = 1 - p$.
Example: You roll a fair die 20 times and count how many times you roll a 6. Here $n = 20$ and $p = \tfrac{1}{6}$.
- Mean: $\mu = 20 \times \tfrac{1}{6} \approx 3.33$ (you'd expect about 3.33 sixes)
- Variance: $\sigma^2 = 20 \times \tfrac{1}{6} \times \tfrac{5}{6} \approx 2.78$
- Standard deviation: $\sigma = \sqrt{2.78} \approx 1.67$
The mean tells you the expected number of successes, and the standard deviation tells you how much the actual count will typically vary from that expected value.
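These formulas are simple enough to compute directly. A minimal sketch reproducing the die-rolling example above (the function name `binomial_stats` is my own):

```python
from math import sqrt

def binomial_stats(n, p):
    """Mean (np), variance (np(1-p)), and standard deviation of a
    binomial count with n trials and success probability p."""
    mean = n * p
    variance = n * p * (1 - p)
    return mean, variance, sqrt(variance)

# Rolling a fair die 20 times and counting sixes: n = 20, p = 1/6.
mean, var, sd = binomial_stats(20, 1/6)
print(round(mean, 2), round(var, 2), round(sd, 2))  # 3.33 2.78 1.67
```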
Additional Concepts
- Cumulative distribution function (CDF): Gives the probability of obtaining up to a certain number of successes, i.e., $P(X \le k) = \sum_{i=0}^{k} \binom{n}{i} p^i (1-p)^{n-i}$. This is useful when you need "at most" or "fewer than" probabilities. You calculate it by summing individual binomial probabilities from 0 to $k$.
- Law of large numbers: As you increase the number of trials, the observed proportion of successes gets closer and closer to the true probability $p$. Flip a coin 10 times and you might get 70% heads. Flip it 10,000 times and you'll almost certainly be very close to 50%.
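Both ideas can be demonstrated in a few lines. A minimal sketch, again using `math.comb` (the function name `binomial_cdf` is my own, and the coin-flip loop is just an illustration of the law of large numbers, not a proof):

```python
import random
from math import comb

def binomial_cdf(k, n, p):
    """P(X <= k): the sum of individual binomial probabilities from 0 to k."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# "At most 2 heads in 5 fair flips" = P(X=0) + P(X=1) + P(X=2):
print(binomial_cdf(2, 5, 0.5))  # 0.5

# Law of large numbers: the observed proportion of heads drifts toward p = 0.5
# as the number of flips grows.
rng = random.Random(1)
for trials in (10, 10_000):
    heads = sum(1 for _ in range(trials) if rng.random() < 0.5)
    print(trials, heads / trials)
```

With only 10 flips the observed proportion can easily land far from 0.5; with 10,000 flips it reliably lands very close to it, which is the law of large numbers in action.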