🎲 Intro to Probability

Basic Probability Rules

Why This Matters

Probability rules form the backbone of statistical reasoning, and you'll see them everywhere on your exam. Whether you're calculating the likelihood of combined events, updating predictions with new information, or working backward from outcomes to causes, these rules give you the mathematical toolkit to handle uncertainty. The concepts here connect directly to hypothesis testing, expected value calculations, and statistical inference, so mastering them now will serve you throughout the course.

The thing to understand is that probability rules aren't random formulas to memorize. They're logical tools designed for specific situations. You're being tested on your ability to recognize which rule applies and why it works in a given scenario. Know what type of relationship between events each rule addresses and when to reach for it.


Combining Events: The Addition Rules

When you need to find the probability that at least one of several events occurs, you're looking at a union of events. The key question is whether those events can happen simultaneously. That determines which formula you need.

Addition Rule for Mutually Exclusive Events

Mutually exclusive events cannot occur together. If one happens, the other is impossible (like rolling a 3 or a 5 on a single die).

  • Formula: P(A or B) = P(A) + P(B)
  • There's no overlap, so there's no double-counting to worry about.
  • Recognition cue: Look for phrases like "cannot both occur" or scenarios where outcomes are physically distinct.

Addition Rule for Non-Mutually Exclusive Events

When A and B can both occur, adding their individual probabilities counts the overlap twice. You need to subtract it back out.

  • Formula: P(A or B) = P(A) + P(B) - P(A and B)
  • Classic example: Drawing a red card or a face card from a standard deck. There are 6 red face cards (jack, queen, king of hearts and diamonds), and those get counted once in "red cards" and again in "face cards." Subtracting P(red and face) = 6/52 corrects for this.
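To see the overlap correction concretely, here's a small Python sketch that enumerates a standard deck and applies the general addition rule (the variable names are just for illustration):

```python
from fractions import Fraction

# Build a 52-card deck as (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

red = {(r, s) for r, s in deck if s in ("hearts", "diamonds")}   # 26 cards
face = {(r, s) for r, s in deck if r in ("J", "Q", "K")}         # 12 cards

p_red = Fraction(len(red), 52)
p_face = Fraction(len(face), 52)
p_overlap = Fraction(len(red & face), 52)   # 6 red face cards

# General addition rule: subtract the double-counted overlap.
p_red_or_face = p_red + p_face - p_overlap
print(p_red_or_face)   # 8/13, i.e. 32/52
```

Counting the union directly (`len(red | face)` is 32) gives the same answer, which is exactly what the subtraction guarantees.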

Compare: Mutually exclusive vs. non-mutually exclusive addition. Both find P(A or B), but only overlapping events need the -P(A and B) correction. If a problem gives you a Venn diagram with overlap, that's your signal to subtract.


Joint Occurrence: The Multiplication Rules

When you need the probability that both events occur, you're calculating an intersection. The critical distinction is whether knowing one event occurred changes the probability of the other.

Multiplication Rule for Independent Events

Independent events don't influence each other. The outcome of one provides no information about the other.

  • Formula: P(A and B) = P(A) × P(B)
  • Test for independence: Check whether P(B|A) = P(B). If that's true, the events are independent. Equivalently, you can check whether P(A and B) = P(A) × P(B). For example, if P(A) = 0.4, P(B) = 0.3, and P(A and B) = 0.12, then 0.4 × 0.3 = 0.12, confirming independence.
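The independence check from the bullet above translates directly into code; a tolerance is used because these are floating-point numbers:

```python
# A and B are independent iff P(A and B) == P(A) * P(B)
# (equivalently, P(B|A) == P(B)).
p_a, p_b, p_a_and_b = 0.4, 0.3, 0.12

independent = abs(p_a_and_b - p_a * p_b) < 1e-9
print(independent)   # True, since 0.4 * 0.3 = 0.12

# P(B|A) = P(A and B) / P(A) recovers P(B) exactly when independent.
p_b_given_a = p_a_and_b / p_a
print(round(p_b_given_a, 2))   # 0.3
```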

Multiplication Rule for Dependent Events

Dependent events require conditional probability. The first outcome changes the probability landscape for the second.

  • Formula: P(A and B) = P(A) × P(B|A), where P(B|A) is the probability of B given that A has occurred.
  • Classic scenario: Drawing two cards without replacement. If you draw an ace first, the deck now has 51 cards and only 3 aces remaining, so the probability of a second ace changes from 4/52 to 3/51.
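The two-aces scenario works out to a tidy fraction when you multiply the conditional probabilities; a quick sketch:

```python
from fractions import Fraction

# Drawing two aces without replacement:
# P(ace1 and ace2) = P(ace1) * P(ace2 | ace1)
p_first_ace = Fraction(4, 52)
p_second_given_first = Fraction(3, 51)   # one ace and one card are gone

p_two_aces = p_first_ace * p_second_given_first
print(p_two_aces)   # 1/221
```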

Compare: Independent vs. dependent multiplication. Both find P(A and B), but dependent events use P(B|A) instead of P(B). The phrase "without replacement" almost always signals dependence.


Working with Conditions: Conditional Probability and Bayes' Theorem

These rules handle situations where you have partial information. Conditional probability asks "what's the probability given what I already know?" while Bayes' theorem lets you reverse the conditioning direction.

Conditional Probability

This restricts the sample space. You're only considering outcomes where the given event has already occurred.

  • Formula: P(A|B) = P(A and B) / P(B)
  • Think of it as zooming in on the "B happened" universe and finding A's proportion within it. If 60% of students pass the final, and 40% of all students both studied and passed, then P(studied|passed) = 0.40/0.60 ≈ 0.67.
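The students example above is a one-line division once you identify the joint and marginal probabilities:

```python
# P(studied | passed) = P(studied and passed) / P(passed)
p_passed = 0.60
p_studied_and_passed = 0.40

p_studied_given_passed = p_studied_and_passed / p_passed
print(round(p_studied_given_passed, 2))   # 0.67
```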

Bayes' Theorem

Bayes' theorem reverses conditional probability. It lets you find P(A|B) when you know P(B|A) instead.

  • Formula: P(A|B) = P(B|A) × P(A) / P(B)
  • This updates your prior belief P(A) with new evidence to get a posterior probability.
  • Common applications: Medical testing, spam filters, any scenario asking "given this result, what caused it?"
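As a sketch, the theorem itself is a one-liner; the weather numbers below are hypothetical, chosen purely to illustrate the prior-to-posterior update:

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical: P(rain) = 0.3, P(clouds|rain) = 0.9, P(clouds) = 0.5.
# Seeing clouds raises belief in rain from 0.3 to 0.54.
print(round(bayes(0.9, 0.3, 0.5), 2))   # 0.54
```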

Law of Total Probability

This rule breaks a complex probability into cases by partitioning the sample space into mutually exclusive scenarios.

  • Formula: P(A) = Σ P(A|Bᵢ) × P(Bᵢ), where the Bᵢ events cover all possibilities without overlap.
  • Often paired with Bayes: Use this to calculate the denominator P(B) when it's not given directly. For instance, if you need P(positive test) and you know the test's accuracy rates for both diseased and healthy people, you'd sum across both groups.
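Combining the two rules, here's a sketch of the classic medical-testing calculation. The numbers are assumptions for illustration (1% prevalence, 95% sensitivity, and an assumed 5% false-positive rate):

```python
# Hypothetical test parameters.
p_disease = 0.01
p_pos_given_disease = 0.95
p_pos_given_healthy = 0.05   # assumed false-positive rate

# Law of total probability over the partition {disease, healthy}:
# P(positive) = sum of P(positive | group) * P(group).
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))
print(round(p_positive, 4))   # 0.059

# Bayes' theorem then gives the posterior P(disease | positive).
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive
print(round(p_disease_given_pos, 3))   # 0.161
```

Note how a positive result from an accurate test still leaves only about a 16% chance of disease, because the disease is rare. That counterintuitive gap is exactly why these problems show up so often.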

Compare: Conditional probability vs. Bayes' theorem. Conditional probability is the definition (P(A|B) from joint and marginal probabilities), while Bayes' flips the conditioning when you know the "wrong direction." Problems love giving you P(positive|disease) and asking for P(disease|positive).


Simplifying Strategies: Complements and Set Operations

Sometimes the easiest path to an answer is indirect. The complement rule and set notation give you alternative approaches that can dramatically simplify calculations.

Complement Rule

The complement of event A is everything that isn't A. This is especially powerful for "at least one" problems.

  • Formula: P(A′) = 1 - P(A), or equivalently, P(A) = 1 - P(A′)
  • Why it helps: For P(at least one success in n trials), you'd normally need to add up the cases of exactly 1, exactly 2, exactly 3 successes, etc. Instead, calculate 1 - P(all failures), which is a single computation.
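For fair coin flips, the shortcut is one line, versus summing five binomial terms directly:

```python
# P(at least one head in n fair flips) = 1 - P(all tails) = 1 - 0.5**n
n = 5
p_at_least_one_head = 1 - 0.5 ** n
print(p_at_least_one_head)   # 0.96875, i.e. 31/32
```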

Probability of Union of Events

Union means "or." At least one of the events occurs.

  • Formula: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
  • This is the general addition rule written in set notation. For three or more events, you'll need more inclusion-exclusion corrections (add back triple overlaps, etc.).
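The three-event pattern can be verified by brute-force enumeration. Here's a sketch using one roll of a die, with events chosen just for illustration:

```python
from fractions import Fraction

# One die roll: A = even, B = greater than 3, C = prime.
outcomes = {1, 2, 3, 4, 5, 6}
A, B, C = {2, 4, 6}, {4, 5, 6}, {2, 3, 5}

def p(event):
    return Fraction(len(event), len(outcomes))

# Inclusion-exclusion for three events:
# add singles, subtract pairwise overlaps, add back the triple overlap.
lhs = p(A | B | C)
rhs = (p(A) + p(B) + p(C)
       - p(A & B) - p(A & C) - p(B & C)
       + p(A & B & C))
assert lhs == rhs
print(lhs)   # 5/6 (only the outcome 1 is excluded)
```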

Probability of Intersection of Events

Intersection means "and." Both events occur simultaneously.

  • Formula: P(A ∩ B) = P(A) × P(B|A) for dependent events, or P(A) × P(B) for independent events.
  • Venn diagram connection: The intersection is the overlapping region where both circles meet.

Compare: Union vs. intersection. Union (∪) asks "at least one," intersection (∩) asks "both." Confusing these is a common error. Union is always at least as large as either individual probability, while intersection is always at most as large.


Quick Reference Table

| Concept | Best Examples |
| --- | --- |
| Finding P(A or B) | Addition Rule (mutually exclusive), General Addition Rule (overlapping) |
| Finding P(A and B) | Multiplication Rule (independent), Conditional Multiplication (dependent) |
| Events don't affect each other | Independent Multiplication Rule |
| Events do affect each other | Dependent Multiplication Rule, Conditional Probability |
| Reversing conditional direction | Bayes' Theorem |
| Breaking into cases | Law of Total Probability |
| "At least one" shortcuts | Complement Rule |
| Set notation translations | Union = or, Intersection = and, Complement = not |

Self-Check Questions

  1. You're told P(A) = 0.4, P(B) = 0.3, and P(A and B) = 0.12. Are A and B independent? Which multiplication rule applies, and how do you know?

  2. Compare the addition rule for mutually exclusive events with the general addition rule. When does the simpler version work, and what error would you make using it incorrectly on overlapping events?

  3. A medical test has a 95% detection rate for a disease that affects 1% of the population. If someone tests positive, what rule would you use to find the probability they actually have the disease? What other information do you need?

  4. You want to find the probability of getting at least one head in five coin flips. Explain why the complement rule is more efficient than direct calculation, and set up the solution.

  5. Given a two-way table showing gender and major preferences, how would you calculate P(female|STEM major) versus P(STEM major|female)? What's the relationship between these two values?