Independence is one of the most powerful ideas in probability. It lets you break complex problems into manageable pieces. When events are independent, you can multiply probabilities directly, turning intimidating multi-step problems into straightforward calculations. Independence shows up everywhere from basic probability questions to binomial distributions, hypothesis testing, and experimental design.
The core idea: independence is about information. If knowing one outcome tells you nothing new about another, those events are independent. Don't just memorize the formula $P(A \cap B) = P(A) \cdot P(B)$. Understand why it works and when it applies.
Independence captures a simple but precise idea: one event provides no information about another. This isn't about events being unrelated in everyday language. It's a specific mathematical relationship.
The conditional probability formula is $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$. For independent events, this simplifies to $P(A \mid B) = P(A)$, because you can substitute $P(A)P(B)$ for $P(A \cap B)$ in the numerator, and the $P(B)$ cancels out.
This gives you an alternative way to check independence: if conditioning on B changes the probability of A, the events are dependent.
Compare: The multiplication rule $P(A \cap B) = P(A) \cdot P(B)$ vs. the conditional definition $P(A \mid B) = P(A)$. Both express the same concept, but the conditional version is often more intuitive for checking independence. If a problem gives you conditional probabilities, use the second form.
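Both checks can be verified by exhaustive enumeration. Here's a minimal Python sketch using two fair dice; the particular events A and B are illustrative choices, not from the original text:

```python
from fractions import Fraction

# Sample space: two fair six-sided dice, all 36 outcomes equally likely.
outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

def prob(event):
    """Probability of an event given as a predicate over outcomes."""
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

A = lambda o: o[0] == 6          # first die shows 6
B = lambda o: o[0] + o[1] == 7   # the two dice sum to 7

p_a = prob(A)
p_b = prob(B)
p_ab = prob(lambda o: A(o) and B(o))

# Conditional check: P(A | B) = P(A and B) / P(B)
p_a_given_b = p_ab / p_b

print(p_a, p_a_given_b)    # both 1/6: conditioning on B doesn't change P(A)
print(p_ab == p_a * p_b)   # True: the multiplication rule agrees
```

Either check works here: conditioning on "sum is 7" leaves the probability of "first die is 6" unchanged, and the joint probability factors into the product.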
Confusing "mutually exclusive" with "independent" is the #1 conceptual trap on probability exams. Students constantly confuse these two ideas, but they're close to opposites.
Compare: Flipping heads vs. flipping tails on one coin (mutually exclusive) vs. flipping heads on two different coins (independent). The first pair can't both happen. The second pair gives no information about each other. If a question asks whether mutually exclusive events are independent, the answer is NO (assuming both events have nonzero probability).
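The coin comparison above can be made concrete with exact fractions (a quick sketch, not part of the original guide):

```python
from fractions import Fraction

# One fair coin: heads and tails are mutually exclusive.
p_h = Fraction(1, 2)
p_t = Fraction(1, 2)
p_h_and_t = Fraction(0)          # both can't happen on one flip
print(p_h_and_t == p_h * p_t)    # False: mutually exclusive, NOT independent

# Two fair coins: heads on coin 1 and heads on coin 2 are independent.
two_coins = [(a, b) for a in "HT" for b in "HT"]
p_h1 = Fraction(sum(1 for o in two_coins if o[0] == "H"), 4)
p_h2 = Fraction(sum(1 for o in two_coins if o[1] == "H"), 4)
p_both = Fraction(sum(1 for o in two_coins if o == ("H", "H")), 4)
print(p_both == p_h1 * p_h2)     # True: independent, and both CAN happen
```

The mutually exclusive pair fails the multiplication check ($0 \neq \frac{1}{2} \cdot \frac{1}{2}$), which is exactly why nonzero-probability events can't be both.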
Independence scales up naturally, which is what makes it so useful for modeling repeated experiments.
Independent trials are repeated experiments where each outcome doesn't influence the others. This is the foundation of binomial probability.
A Bernoulli trial is a single experiment with exactly two outcomes (success/failure). A Bernoulli process is a sequence of independent Bernoulli trials, each with the same probability of success $p$. The binomial distribution counts the number of successes across $n$ such trials:

$$P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$
This formula only works because the trials are independent. If outcomes influenced each other, you couldn't simply multiply the probabilities of individual successes and failures.
Compare: A single Bernoulli trial vs. a Bernoulli process. One is a single yes/no experiment; the other is a sequence of independent repetitions. The binomial distribution counts successes across the process, which only works because trials are independent.
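The binomial formula is short enough to compute directly. A minimal Python sketch (the coin-flip numbers are an illustrative choice):

```python
from math import comb

def binomial_pmf(n, k, p):
    """P(X = k) for a binomial(n, p): C(n, k) * p^k * (1-p)^(n-k).
    Valid only when the n trials are independent with constant p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of exactly 3 heads in 5 fair coin flips.
print(binomial_pmf(5, 3, 0.5))   # 0.3125

# Sanity check: the pmf sums to 1 over k = 0..n.
print(sum(binomial_pmf(5, k, 0.5) for k in range(6)))   # 1.0
```

Each term multiplies $p$ once per success and $1-p$ once per failure, which is only legitimate because the trials are independent; the $\binom{n}{k}$ factor counts the orderings.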
When you move from events to random variables, independence takes on a more powerful form that's essential for statistical modeling.
Sometimes you need to check whether independence holds in real data. Contingency tables let you do this.
Compare: Theoretical independence (assumed in a model) vs. tested independence (verified with data). The first is a modeling choice you state as an assumption. The second is a statistical conclusion you reach through a test. Problems on inference often require you to state independence as an assumption before proceeding.
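Testing independence in a contingency table comes down to comparing observed counts against the counts expected under independence. A sketch in plain Python with hypothetical (made-up) counts:

```python
# Hypothetical 2x2 contingency table: rows = group, columns = outcome.
observed = [[30, 20],
            [20, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Under independence, expected count = (row total * column total) / grand total.
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Chi-square statistic: sum of (observed - expected)^2 / expected over all cells.
chi_sq = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
             for i in range(2) for j in range(2))

print(expected)   # [[25.0, 25.0], [25.0, 25.0]]
print(chi_sq)     # 4.0
```

For a 2x2 table the test has 1 degree of freedom; a statistic of 4.0 exceeds the 3.84 critical value at the 5% level, so these hypothetical data would suggest an association (dependence).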
Independence isn't just a calculation tool. It's a fundamental assumption underlying most of statistical inference.
Compare: Coin flips (truly independent by physics) vs. stock prices (often assumed independent but actually correlated). Real-world applications require checking whether the independence assumption is reasonable.
| Concept | Key Formula or Fact |
|---|---|
| Independence definition | $P(A \cap B) = P(A) \cdot P(B)$ |
| Conditional form | $P(A \mid B) = P(A)$ if independent |
| Mutually exclusive | $P(A \cap B) = 0$ (NOT independent if both have positive probability) |
| Multiple independent events | Multiply all individual probabilities |
| Independent random variables | Joint = product of marginals |
| Variance of sum (independent) | $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$ |
| Chi-square test | Tests association in contingency tables |
| Bernoulli process | Independent trials with constant $p$ |
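The variance-of-sum row can be verified exactly by enumeration. A sketch using two fair dice (an illustrative example, not from the original guide):

```python
from fractions import Fraction

faces = range(1, 7)

def var(values_probs):
    """Variance of a discrete distribution given as (value, probability) pairs."""
    mean = sum(v * p for v, p in values_probs)
    return sum(p * (v - mean) ** 2 for v, p in values_probs)

# One fair die: each face has probability 1/6.
die = [(Fraction(v), Fraction(1, 6)) for v in faces]
var_die = var(die)   # 35/12

# Sum of two independent dice: enumerate the 36 equally likely pairs.
pair_sum = [(Fraction(a + b), Fraction(1, 36)) for a in faces for b in faces]
var_sum = var(pair_sum)

print(var_sum == 2 * var_die)   # True: Var(X + Y) = Var(X) + Var(Y)
```

The equality holds only because the two dice are independent; for dependent variables a covariance term $2\,\mathrm{Cov}(X, Y)$ would also appear.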
Given values for $P(A)$ and $P(B)$, if A and B are independent, what is $P(A \cap B)$? What if they were mutually exclusive instead?
Two events have known values of $P(A)$, $P(B)$, and $P(A \cap B)$. How do you determine whether A and B are independent? Explain using the definition.
Why can two events with nonzero probabilities be independent but NOT mutually exclusive, and mutually exclusive but NOT independent?
A binomial distribution requires independent trials. If you're sampling without replacement from a small population, why might this assumption be violated? What rule of thumb makes it approximately okay?
A problem asks you to justify using the formula $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$. What condition must you state, and why does it matter?