Probability Terminology and Concepts
Probability forms the foundation of statistical analysis, giving you tools to quantify uncertainty and make predictions. This section covers the core terminology you'll use throughout the rest of the course: experiments, outcomes, sample spaces, events, and the different ways probabilities are calculated and interpreted.

Key terms in probability
Every probability problem starts with an experiment, which is any process or procedure that produces well-defined outcomes. Rolling a die, flipping a coin, and drawing a card from a deck are all experiments.
An outcome is a single result of an experiment. Rolling a 3, getting heads, or drawing the ace of spades are each individual outcomes.
The sample space (denoted $S$) is the set of all possible outcomes of an experiment.
- For a single die roll: $S = \{1, 2, 3, 4, 5, 6\}$
- For a coin flip: $S = \{H, T\}$
- For flipping two coins: $S = \{HH, HT, TH, TT\}$
An event is any subset of the sample space, meaning it's a collection of one or more outcomes. Events are denoted by capital letters like $A$ or $B$.
- Rolling an even number on a die: $E = \{2, 4, 6\}$
- Drawing a face card from a standard deck: $\{J, Q, K\}$ of each suit (12 cards total)
Two events are mutually exclusive if they cannot occur at the same time. For example, rolling a 2 and rolling a 5 on a single die roll are mutually exclusive. But rolling an even number and rolling a number greater than 3 are not mutually exclusive, since 4 and 6 satisfy both.
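Python's built-in sets make these definitions concrete: events are subsets of the sample space, and two events are mutually exclusive exactly when their intersection is empty. A minimal sketch (the variable names are illustrative, not part of the text above):

```python
# Sample space for a single die roll, and events as subsets of it.
die = {1, 2, 3, 4, 5, 6}

roll_2 = {2}                 # event: roll a 2
roll_5 = {5}                 # event: roll a 5
even = {2, 4, 6}             # event: roll an even number
greater_than_3 = {4, 5, 6}   # event: roll a number greater than 3

# Mutually exclusive: the events share no outcomes.
print(roll_2 & roll_5)             # set()
# Not mutually exclusive: 4 and 6 satisfy both events.
print(even & greater_than_3)       # {4, 6}
```

The `&` operator computes the set intersection, so an empty result means the two events can never occur on the same roll.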

Calculation of probability types
Theoretical probability is what you'd expect to happen based on the known structure of the experiment. You calculate it with:
$$P(E) = \frac{\text{number of favorable outcomes}}{\text{total number of possible outcomes}}$$
This formula assumes all outcomes are equally likely. The theoretical probability of rolling a 3 on a fair die is $\frac{1}{6}$.
Experimental probability is based on what actually happens when you run the experiment. You calculate it with:
$$P(E) = \frac{\text{number of times the event occurs}}{\text{total number of trials}}$$
If you roll a die 60 times and a 3 appears 12 times, the experimental probability is $\frac{12}{60} = 0.2$. Notice that $0.2$ doesn't match the theoretical $\frac{1}{6} \approx 0.167$, and that's normal for a small number of trials.
The Law of Large Numbers explains why this gap closes over time: as the number of trials increases, the experimental probability converges toward the theoretical probability. Roll that die 10,000 times, and the proportion of 3s will be much closer to $\frac{1}{6}$.
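You can watch this convergence in a short simulation. The sketch below (trial counts and seed chosen arbitrarily) estimates the experimental probability of rolling a 3 at increasing numbers of trials; exact counts vary run to run, but the estimate drifts toward $\frac{1}{6}$:

```python
import random

# Simulate the Law of Large Numbers: the experimental probability of
# rolling a 3 approaches the theoretical 1/6 as trials increase.
random.seed(42)  # fixed seed so the run is reproducible

for trials in (60, 1_000, 100_000):
    threes = sum(1 for _ in range(trials) if random.randint(1, 6) == 3)
    print(f"{trials:>7} rolls: P(3) ~ {threes / trials:.4f}")

print(f"theoretical: {1 / 6:.4f}")
```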

Conditional vs. unconditional probabilities
Unconditional probability is the probability of an event with no extra information given. It considers the entire sample space and is written simply as $P(A)$.
Conditional probability is the probability of an event given that some other event has already occurred. It's written $P(A \mid B)$, read as "the probability of A given B." The formula is:
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$
where $P(A \cap B)$ is the probability that both A and B occur, and $P(B) > 0$.
The key difference: unconditional probability uses the full sample space, while conditional probability restricts the sample space to only the outcomes where the given condition is true. Conditioning on new information updates the likelihood of an event.
Card deck example: The unconditional probability of drawing a heart from a standard deck is:
$$P(\text{heart}) = \frac{13}{52} = \frac{1}{4}$$
Now suppose you're told the card drawn is red. The sample space shrinks from 52 cards to 26 (only hearts and diamonds). The conditional probability becomes:
$$P(\text{heart} \mid \text{red}) = \frac{13}{26} = \frac{1}{2}$$
Knowing the card is red doubled the probability it's a heart, because you eliminated all the black cards from consideration.
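The card example translates directly into code. A sketch using exact fractions (the variable names are illustrative): every heart is red, so $P(\text{heart} \cap \text{red}) = \frac{13}{52}$, and the conditional probability formula gives $\frac{1}{2}$.

```python
from fractions import Fraction

hearts, red_cards, deck = 13, 26, 52

# Unconditional: full 52-card sample space.
p_heart = Fraction(hearts, deck)                        # 13/52 = 1/4

# Conditional: P(heart | red) = P(heart and red) / P(red).
# Every heart is red, so P(heart and red) = 13/52.
p_heart_given_red = Fraction(hearts, deck) / Fraction(red_cards, deck)

print(p_heart)            # 1/4
print(p_heart_given_red)  # 1/2
```

Using `Fraction` instead of floats keeps the arithmetic exact, which mirrors how these probabilities are written by hand.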
Random Variables and Probability Distributions
A random variable is a variable whose value is determined by the outcome of a random process. Random variables come in two types:
- Discrete: takes on countable values (number of heads in 10 flips, number of students absent)
- Continuous: takes on any value within a range (height, temperature, time)
A probability distribution describes how likely each possible value of a random variable is.
- For discrete random variables, this is called a probability mass function (PMF). It assigns a probability to each specific value.
- For continuous random variables, this is called a probability density function (PDF). Probabilities are found over intervals rather than at individual points.
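A PMF is easy to build for a small discrete random variable. The sketch below (my own example, not from the text) takes $X$ = number of heads in 10 fair coin flips, where each value $k$ has probability $\binom{10}{k} / 2^{10}$:

```python
from math import comb

# PMF for X = number of heads in 10 fair coin flips.
# Each value k gets probability C(10, k) / 2**10.
n = 10
pmf = {k: comb(n, k) / 2**n for k in range(n + 1)}

print(pmf[5])             # the most likely value, C(10,5)/1024
print(sum(pmf.values()))  # a valid PMF sums to 1
```

A PDF for a continuous variable can't be tabulated this way; you'd integrate the density over an interval instead of summing point probabilities.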
Independence is a related concept you'll use constantly. Two events A and B are independent if the occurrence of one doesn't change the probability of the other. Mathematically, for independent events:
$$P(A \cap B) = P(A) \times P(B)$$
You can also check independence using conditional probability: if $P(A \mid B) = P(A)$, then A and B are independent. If knowing B happened doesn't change the probability of A, the two events have no influence on each other.
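The multiplication rule gives a direct test for independence on a finite sample space. In this sketch (events chosen for illustration), "roll is even" and "roll is at most 4" turn out to be independent on a fair die, while "roll is even" and "roll is greater than 3" do not:

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    # Equally likely outcomes: |event| / |sample space|.
    return Fraction(len(event), len(sample_space))

A = {2, 4, 6}       # roll is even
B = {1, 2, 3, 4}    # roll is at most 4
C = {4, 5, 6}       # roll is greater than 3

# Independent: P(A and B) = 1/3 equals P(A) * P(B) = 1/2 * 2/3.
print(prob(A & B) == prob(A) * prob(B))   # True
# Not independent: P(A and C) = 1/3, but P(A) * P(C) = 1/4.
print(prob(A & C) == prob(A) * prob(C))   # False
```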