📊 AP Statistics

Probability Rules


Why This Matters

Probability rules form the mathematical backbone of everything you'll do in AP Statistics. They show up when you calculate the likelihood of sample outcomes, when you build confidence intervals, and when you run hypothesis tests. Sampling distributions? Those rely on probability rules to predict how sample statistics behave. Type I error? That's a probability calculation too.

The exam tests whether you can choose the right rule for a given scenario, not just whether you've memorized formulas. The key distinction is recognizing the structure of a problem: Are events independent or dependent? Are they mutually exclusive or overlapping? Can you work backward from a result to find a cause? Knowing when each rule applies and why it works is what separates a 3 from a 5.


Combining Probabilities: The Addition Rules

When you need the probability that at least one of two events occurs (A or B), you're in addition rule territory. The key question is whether the events can happen simultaneously.

Addition Rule for Mutually Exclusive Events

  • Use when events cannot occur together. If one happens, the other is impossible (like rolling a 2 or rolling a 5 on a single die).
  • Formula: P(A \text{ or } B) = P(A) + P(B)
  • There's no overlap to subtract because none exists. Watch for language like "distinct outcomes," "cannot both occur," or scenarios with non-overlapping categories.

Addition Rule for Non-Mutually Exclusive Events

  • Use when events can overlap. Both could happen at the same time (like drawing a red card or drawing a king, since the king of hearts and the king of diamonds are both red cards and kings).
  • Formula: P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B)
  • You subtract the intersection to avoid double-counting. The most common error on the exam is forgetting to subtract that overlap, which inflates your probability (sometimes even above 1, which should be an immediate red flag).

Compare: Both rules find P(A \text{ or } B). The mutually exclusive version is just a special case where P(A \text{ and } B) = 0, making the subtraction unnecessary. On FRQs, always state whether events can overlap before choosing your formula.
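As a quick numerical check, both addition rules can be verified with exact fractions. This is a sketch using Python's `fractions` module; the die and card scenarios mirror the examples above.

```python
# Addition rules illustrated with a single die roll and a 52-card deck.
# Fractions keep the arithmetic exact, so overlaps are easy to spot.
from fractions import Fraction

# Mutually exclusive: rolling a 2 or rolling a 5 on one die.
p_two = Fraction(1, 6)
p_five = Fraction(1, 6)
p_two_or_five = p_two + p_five                 # no overlap to subtract
print(p_two_or_five)                           # 1/3

# Overlapping: drawing a red card or drawing a king.
p_red = Fraction(26, 52)
p_king = Fraction(4, 52)
p_red_and_king = Fraction(2, 52)               # king of hearts, king of diamonds
p_red_or_king = p_red + p_king - p_red_and_king
print(p_red_or_king)                           # 7/13
```

Note that forgetting the subtraction would give 30/52 instead of 28/52, which is the double-counting error the exam looks for.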


Joint Probabilities: The Multiplication Rules

When you need the probability that both events occur (A and B), you're using multiplication. The critical distinction is whether knowing one event changes the probability of the other.

Multiplication Rule for Independent Events

  • Use when one event doesn't affect the other, like flipping a coin twice or selecting with replacement.
  • Formula: P(A \text{ and } B) = P(A) \times P(B)
  • How to verify independence: Check whether P(B|A) = P(B). If that's true, the events are independent. You can also check whether P(A \text{ and } B) = P(A) \times P(B). Both tests are equivalent.

Multiplication Rule for Dependent Events

  • Use when one event changes the probability of another, like drawing cards without replacement.
  • Formula: P(A \text{ and } B) = P(A) \times P(B|A)
  • The conditional probability P(B|A) accounts for how A's occurrence affects B. The classic scenario is sampling without replacement, where the pool of possible outcomes shrinks after each selection.

Compare: Both rules find P(A \text{ and } B), but dependent events require conditional probability. The 10% condition (n < 0.10N) lets you treat sampling without replacement as approximately independent when the sample is small relative to the population. For example, drawing 5 cards from a 52-card deck changes probabilities noticeably, but surveying 50 people from a city of 100,000 barely changes them at all.
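The contrast between the two multiplication rules, and the point of the 10% condition, can be seen directly in exact arithmetic. This sketch uses assumed population numbers (100 "successes" in a population of 100,000) purely for illustration.

```python
# Independent vs. dependent multiplication, with exact fractions.
from fractions import Fraction

# Independent: two coin flips.
p_heads = Fraction(1, 2)
p_two_heads = p_heads * p_heads                    # P(A) * P(B)
print(p_two_heads)                                 # 1/4

# Dependent: drawing two aces without replacement.
p_first_ace = Fraction(4, 52)
p_second_given_first = Fraction(3, 51)             # the pool shrank
p_both_aces = p_first_ace * p_second_given_first
print(p_both_aces)                                 # 1/221

# 10% condition: with a large population, "without replacement" is
# nearly indistinguishable from independence (assumed numbers).
exact = Fraction(100, 100_000) * Fraction(99, 99_999)
approx = Fraction(100, 100_000) ** 2
print(float(exact), float(approx))                 # nearly identical
```

The first two draws from the deck change the conditional probability from 4/52 to 3/51, a shift you can't ignore; the two population-scale numbers agree to several decimal places, which is exactly what the 10% condition promises.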


Conditional Probability and Reversing Direction

These rules handle situations where you have partial information or need to update probabilities based on new evidence. Conditional probability is the foundation for inference.

Conditional Probability

  • Definition: The probability of A occurring given that B has already occurred, written P(A|B).
  • Formula: P(A|B) = \frac{P(A \text{ and } B)}{P(B)}
  • This formula restricts the sample space to only outcomes where B occurred. Think of it this way: you're finding what fraction of B's outcomes also include A. The denominator changes from "all possible outcomes" to "only outcomes where B happened."
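The "restricted sample space" idea is easiest to see with a two-way table. The counts below are hypothetical, invented for this sketch: 100 students classified by grade level and by whether they take statistics.

```python
# Conditional probability from a (hypothetical) two-way table.
# P(stats | senior) = P(senior and stats) / P(senior)
from fractions import Fraction

counts = {
    ("senior", "stats"): 18,
    ("senior", "no stats"): 22,
    ("junior", "stats"): 12,
    ("junior", "no stats"): 48,
}
total = sum(counts.values())                       # 100 students

# Marginal probability of the conditioning event B = "senior".
p_senior = Fraction(sum(v for (g, _), v in counts.items() if g == "senior"), total)

# Joint probability of "senior and stats".
p_senior_and_stats = Fraction(counts[("senior", "stats")], total)

# The denominator is P(senior), not 1: the sample space shrank to the
# 40 seniors, and 18 of them take stats.
p_stats_given_senior = p_senior_and_stats / p_senior
print(p_stats_given_senior)                        # 9/20, i.e. 18 of 40 seniors
```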

Bayes' Theorem

  • Use when you need to reverse conditional direction. You know P(B|A) but need P(A|B).
  • Formula: P(A|B) = \frac{P(B|A) \times P(A)}{P(B)}
  • This updates a prior probability with new evidence. Classic applications include diagnostic testing (finding the probability of disease given a positive test), quality control, and any "given this result, what caused it?" question.

Law of Total Probability

  • Use to find P(B) when B can occur through multiple pathways. It breaks a complex event into mutually exclusive cases.
  • Formula: P(B) = \sum P(B|A_i) \times P(A_i), where the A_i partition the sample space (they're mutually exclusive and cover all possibilities).
  • This is often paired with Bayes' Theorem to calculate the denominator P(B). For instance, in a disease-testing problem, a positive test result can come from someone who has the disease or someone who doesn't. You calculate each pathway separately and add them.

Compare: Conditional probability calculates P(A|B) directly from joint and marginal probabilities, while Bayes' Theorem reverses a known conditional. If an FRQ gives you P(\text{positive test}|\text{disease}) and asks for P(\text{disease}|\text{positive test}), you need Bayes'.
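Here is the classic diagnostic-testing pattern worked end to end, pairing the Law of Total Probability (for the denominator) with Bayes' Theorem. The prevalence, sensitivity, and false positive rate below are assumed for illustration, not taken from the text.

```python
# Bayes' Theorem for a diagnostic test, with assumed rates.
prior = 0.02            # P(disease): 2% prevalence (assumed)
sensitivity = 0.90      # P(positive | disease) (assumed)
false_pos = 0.05        # P(positive | no disease) (assumed)

# Law of Total Probability: the two pathways to a positive result.
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' Theorem: reverse the conditional direction.
p_disease_given_pos = (sensitivity * prior) / p_positive
print(round(p_disease_given_pos, 4))   # about 0.27
```

Even with a 90% detection rate, a positive result only implies about a 27% chance of disease here, because the disease is rare and false positives from the large healthy group dominate the denominator. That counterintuitive gap is exactly why FRQs love this setup.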


Simplifying Strategies: Complements and Counting

These tools make complex probability calculations manageable by reframing the problem or systematically counting outcomes.

Complement Rule

  • Use when "at least one" or "none" appears. It's often far easier to calculate what you don't want and subtract from 1.
  • Formula: P(A') = 1 - P(A), so P(\text{at least one}) = 1 - P(\text{none})
  • This works because all probabilities must sum to 1, so the complement captures everything the event doesn't.

Probability of At Least One Event

This is a strategic application of the complement rule for repeated independent trials.

  • Formula: P(\text{at least one success in } n \text{ trials}) = 1 - P(\text{all failures}) = 1 - (1-p)^n
  • For example, the probability of getting at least one 6 in four rolls of a die: 1 - (5/6)^4 \approx 0.518.
  • Common trap: Trying to add probabilities directly (P(\text{exactly 1}) + P(\text{exactly 2}) + \ldots) instead of using the complement. The complement method requires just one calculation.

Compare: For "at least one" problems, the complement method requires computing only the probability of zero successes. Direct calculation requires summing every case from 1 success up to nn successes. Always check if the complement is simpler.

Permutations and Combinations in Probability

These counting methods help you find the number of favorable outcomes divided by total outcomes in equally likely sample spaces.

  • Permutations count arrangements where order matters: _nP_r = \frac{n!}{(n-r)!}
  • Combinations count selections where order doesn't matter: _nC_r = \frac{n!}{r!(n-r)!}

A quick way to decide: if rearranging the same items gives you a different outcome (like rankings or seating arrangements), use permutations. If rearranging doesn't matter (like choosing a committee), use combinations.
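Python's standard library computes both counts directly (`math.perm` and `math.comb`, available since Python 3.8). A sketch, with the ranking and committee examples from above plus a hypothetical card-counting probability:

```python
# Permutations (order matters) vs. combinations (order doesn't).
from math import comb, perm

# Rankings: gold/silver/bronze among 10 runners -- order matters.
print(perm(10, 3))   # 720

# Committee: choose 3 of the same 10 people -- order doesn't matter.
print(comb(10, 3))   # 120

# Counting in a probability: P(a 5-card hand is all hearts),
# favorable outcomes over total equally likely outcomes.
p_all_hearts = comb(13, 5) / comb(52, 5)
print(p_all_hearts)
```

Note that 720 = 120 × 3!, since each unordered committee of 3 corresponds to 3! = 6 different orderings.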


Quick Reference Table

| Scenario | Rule to Use |
| --- | --- |
| Events that can't overlap | Mutually exclusive addition rule |
| Events that can overlap | General addition rule (subtract intersection) |
| Events that don't affect each other | Independent multiplication rule |
| Events where one affects the other | Dependent multiplication rule, conditional probability |
| Reversing conditional direction | Bayes' Theorem |
| Breaking down complex events | Law of Total Probability |
| "At least one" problems | Complement rule |
| Counting equally likely outcomes | Permutations (order matters), Combinations (order doesn't) |

Self-Check Questions

  1. You're told P(A) = 0.4, P(B) = 0.3, and P(A \text{ and } B) = 0.12. Are A and B independent? How do you know, and what rule confirms this?

  2. A medical test has a 95% detection rate for a disease (sensitivity) and a 3% false positive rate. If 1% of the population has the disease, what's the probability someone with a positive test actually has the disease? Which rules do you need?

  3. Compare the addition rule for mutually exclusive events with the general addition rule. Under what condition does the general rule simplify to the mutually exclusive version?

  4. You're drawing 3 cards from a standard deck without replacement. Explain why you must use the dependent multiplication rule, and describe how the 10% condition relates to treating this as approximately independent.

  5. An FRQ asks: "Find the probability of getting at least one head in 5 coin flips." Write out both the complement approach and explain why directly calculating P(1) + P(2) + P(3) + P(4) + P(5) is less efficient.