Counting principles and probability give you the tools to figure out how many outcomes are possible in a situation and how likely any particular outcome is. These ideas show up constantly in Honors Algebra II, and they connect directly to the binomial theorem work you've already done. This section covers permutations, combinations, probability rules, conditional probability, and how the binomial theorem applies to probability problems.
Permutations and Combinations for Counting

Arranging Objects in a Specific Order
A permutation counts the number of ways to arrange objects when order matters. Choosing president, then vice president, then treasurer from a group is a permutation problem because the same three people in a different order produce a different result.
The formula is P(n, r) = n! / (n - r)!, where:
- n = total number of objects
- r = number of objects being arranged
Example: How many ways can you arrange 5 books on a shelf? P(5, 5) = 5! = 120.
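These counts can be checked with Python's standard library (a quick sketch; `math.perm` requires Python 3.8+, and the group of 8 in the second example is an assumed size for illustration):

```python
import math

# Arranging all 5 books on a shelf: P(5, 5) = 5! = 120
print(math.perm(5, 5))  # 120

# Choosing president, then VP, then treasurer from a group of 8:
# order matters, so P(8, 3) = 8 * 7 * 6 = 336
print(math.perm(8, 3))  # 336
```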
Selecting Subsets of Objects
A combination counts the number of ways to select objects when order does not matter. Choosing 3 committee members from a group of 10 is a combination problem because the same three people in any order form the same committee.
The formula is C(n, r) = n! / (r! (n - r)!), where:
- n = total number of objects
- r = number of objects being selected
Example: How many ways can you choose 3 toppings from a list of 10? C(10, 3) = 10! / (3! · 7!) = 120.
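The topping count can be verified the same way (a sketch using Python's standard library):

```python
import math

# Choosing 3 toppings from 10, order irrelevant: C(10, 3) = 120
print(math.comb(10, 3))  # 120

# A combination is a permutation divided by the r! orderings of the
# selection: P(10, 3) / 3! gives the same 120.
print(math.perm(10, 3) // math.factorial(3))  # 120
```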
The quick way to remember the difference: if rearranging your selection would create a different outcome, use permutations. If rearranging changes nothing, use combinations.
Applying the Fundamental Counting Principle
The Fundamental Counting Principle says that if one event can happen in m ways and a second independent event can happen in n ways, then the two events together can happen in m × n ways. This extends to any number of events.
Example: You have 4 shirts and 3 pairs of pants. The number of possible outfits is 4 × 3 = 12.
This principle is the backbone of most counting problems. Permutations and combinations are really just specialized applications of it.
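The outfit count can be confirmed by brute-force enumeration (a minimal sketch; the shirt and pants labels are placeholders):

```python
from itertools import product

shirts = ["S1", "S2", "S3", "S4"]
pants = ["P1", "P2", "P3"]

# The Fundamental Counting Principle predicts 4 * 3 = 12 outfits;
# listing every (shirt, pants) pair confirms it.
outfits = list(product(shirts, pants))
print(len(outfits))  # 12
```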
Permutations and Combinations with Repetition
Sometimes objects can be selected more than once.
Permutations with repetition use the formula n^r, where n is the number of choices for each position and r is the number of positions.
- Example: A 4-digit PIN using digits 0–9 allows repetition. That gives 10^4 = 10,000 possible codes.
Combinations with repetition (sometimes called "stars and bars" problems) use C(n + r - 1, r), where n is the number of types to choose from and r is the number of selections.
- Example: Selecting 3 scoops from 5 ice cream flavors (repeats allowed): C(5 + 3 - 1, 3) = C(7, 3) = 35 ways.
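The stars-and-bars count can be cross-checked by enumerating every multiset directly (a sketch; the flavor names are assumptions):

```python
import math
from itertools import combinations_with_replacement

flavors = ["vanilla", "chocolate", "strawberry", "mint", "coffee"]

# Stars and bars: C(5 + 3 - 1, 3) = C(7, 3) = 35
print(math.comb(5 + 3 - 1, 3))  # 35

# Brute-force enumeration of 3 scoops with repeats allowed agrees.
print(len(list(combinations_with_replacement(flavors, 3))))  # 35
```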
Probabilities of Events

Probability Basics
Probability measures how likely an event is to occur, on a scale from 0 (impossible) to 1 (certain). The probability of event A is written P(A).
The complement of event A, written A′, is the event "not A." Their probabilities always add to 1: P(A) + P(A′) = 1.
This is surprisingly useful. When a problem asks for the probability that at least one thing happens, it's often easier to calculate the complement (nothing happens) and subtract from 1.
Example: If P(A) = 0.3, then P(A′) = 1 - 0.3 = 0.7.
Mutually Exclusive and Independent Events
Mutually exclusive events cannot happen at the same time. For mutually exclusive events A and B: P(A or B) = P(A) + P(B).
Example: Rolling a 1 or a 6 on a fair die: P(1 or 6) = 1/6 + 1/6 = 2/6 = 1/3.
If events are not mutually exclusive (they can overlap), you need the general addition rule: P(A or B) = P(A) + P(B) - P(A and B). You subtract the overlap to avoid counting it twice.
Independent events are events where the outcome of one has no effect on the other. For independent events A and B: P(A and B) = P(A) × P(B).
Example: Flipping two fair coins and getting heads on both: P(HH) = 1/2 × 1/2 = 1/4.
Don't confuse these two concepts. Mutually exclusive is about "or" (can they both happen?). Independence is about "and" (does one affect the other?). In fact, if two events both have nonzero probability, they cannot be both mutually exclusive and independent.
Law of Large Numbers
The Law of Large Numbers states that as you increase the number of trials, the experimental probability gets closer and closer to the theoretical probability.
Example: Flip a fair coin 10 times and you might get 7 heads (70%). Flip it 10,000 times and your heads percentage will be very close to 50%. The coin doesn't "remember" past flips; it's just that random variation averages out over many trials.
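A quick simulation illustrates the trend (a sketch; the seed is fixed only so the run is reproducible):

```python
import random

random.seed(1)  # reproducible run

# Experimental probability of heads drifts toward the theoretical 0.5
# as the number of flips grows.
for n in (10, 100, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)
```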
Conditional Probability and Independence
Conditional Probability
Conditional probability is the probability of event B happening given that event A has already happened:
P(B | A) = P(A and B) / P(A)
Read P(B | A) as "the probability of B given A." The key idea is that knowing A occurred changes your sample space.
The multiplication rule rearranges this formula: P(A and B) = P(A) × P(B | A).
Example: From a standard 52-card deck, what's the probability that a card is a king given that it's a heart? There are 13 hearts, and 1 of them is a king, so P(king | heart) = 1/13.
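The card example can be checked by building the reduced sample space explicitly (a minimal sketch using exact fractions):

```python
from fractions import Fraction

# Build a standard 52-card deck as (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

# Knowing the card is a heart shrinks the sample space to 13 cards.
hearts = [c for c in deck if c[1] == "hearts"]
kings_among_hearts = [c for c in hearts if c[0] == "K"]

# P(king | heart) = |king and heart| / |heart|
print(Fraction(len(kings_among_hearts), len(hearts)))  # 1/13
```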

Independence
Two events A and B are independent if knowing one occurred gives you no information about the other. Formally: P(B | A) = P(B).
To test whether events are independent, check if P(A and B) = P(A) × P(B). If that equation holds, they're independent. If not, they're dependent.
Example: Rolling a die and flipping a coin are independent. The die result tells you nothing about the coin.
Bayes' Theorem and Tree Diagrams
Bayes' Theorem lets you "reverse" a conditional probability. If you know P(B | A) but need P(A | B), Bayes' Theorem gives you:
P(A | B) = P(B | A) × P(A) / P(B)
A common application: a medical test is 95% accurate, and 1% of the population has a disease. If someone tests positive, what's the probability they actually have the disease? Bayes' Theorem handles this, and the answer is often much lower than students expect because of false positives in the large healthy population.
Tree diagrams are a visual way to organize these problems:
- Draw branches for the first event (e.g., "has disease" vs. "no disease") and label each branch with its probability.
- From each first-event branch, draw branches for the second event (e.g., "tests positive" vs. "tests negative") and label with conditional probabilities.
- Multiply along a path to get the joint probability for that sequence of outcomes.
- Add the joint probabilities of all paths that lead to the outcome you want.
Tree diagrams make Bayes' Theorem problems much more manageable because you can see every possible path laid out.
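The medical-test example above can be worked numerically by multiplying along the tree branches (a sketch; "95% accurate" is assumed here to mean 95% sensitivity and 95% specificity):

```python
# Assumed numbers: 1% prevalence, 95% true positive rate,
# 5% false positive rate.
p_disease = 0.01
p_pos_given_disease = 0.95
p_pos_given_healthy = 0.05

# Total probability of testing positive: add the two "positive" paths.
p_pos = (p_disease * p_pos_given_disease
         + (1 - p_disease) * p_pos_given_healthy)

# Bayes' Theorem: P(disease | positive)
p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.161
```

Even with a "95% accurate" test, a positive result means only about a 16% chance of disease, because false positives from the large healthy population dominate.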
Binomial Theorem in Probability
Binomial Expansion
The Binomial Theorem expands expressions of the form (a + b)^n:
(a + b)^n = C(n, 0) a^n + C(n, 1) a^(n-1) b + C(n, 2) a^(n-2) b^2 + ... + C(n, n) b^n
Each term uses a binomial coefficient C(n, k), which you already know from the combinations formula. Pascal's Triangle gives you these same coefficients.
Example: Expand (x + 2)^4. Here a = x, b = 2, and n = 4. The first few terms:
- k = 0: C(4, 0) x^4 (2)^0 = x^4
- k = 1: C(4, 1) x^3 (2)^1 = 8x^3
- Continue for k = 2, 3, 4 to get the full expansion: x^4 + 8x^3 + 24x^2 + 32x + 16.
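The remaining coefficients can be generated term by term (a sketch; each coefficient of x^(4-k) is C(4, k) · 2^k):

```python
import math

# Terms of (x + 2)**4: the coefficient of x**(4 - k) is C(4, k) * 2**k.
for k in range(5):
    coeff = math.comb(4, k) * 2 ** k
    print(f"{coeff} x^{4 - k}")
# Sanity check: at x = 1 the terms must sum to (1 + 2)**4 = 81.
print(sum(math.comb(4, k) * 2 ** k for k in range(5)))  # 81
```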
Binomial Probability Formula
This is where the binomial theorem connects to probability. If you repeat an experiment n times, each trial is independent, and each trial has success probability p, the probability of exactly k successes is:
P(X = k) = C(n, k) p^k (1 - p)^(n - k)
- n = number of trials
- k = number of successes you want
- p = probability of success on a single trial
- 1 - p = probability of failure on a single trial
Example: What's the probability of getting exactly 3 heads in 5 coin flips?
P(X = 3) = C(5, 3) (1/2)^3 (1/2)^2 = 10 × 1/32 = 5/16
The C(n, k) piece counts how many different arrangements of 3 heads and 2 tails exist. The p^k (1 - p)^(n - k) piece gives the probability of any single such arrangement.
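The coin-flip calculation can be reproduced exactly with fractions (a minimal sketch):

```python
import math
from fractions import Fraction

# P(exactly 3 heads in 5 flips) = C(5, 3) * (1/2)**3 * (1/2)**2
p = Fraction(1, 2)
prob = math.comb(5, 3) * p**3 * (1 - p) ** 2
print(prob)  # 5/16
```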
Binomial Distribution Properties
For a binomial distribution with n trials and success probability p:
- Mean (expected value): μ = np
- Standard deviation: σ = √(np(1 - p))
The mean tells you the average number of successes you'd expect. The standard deviation tells you how much variation to expect around that average.
Cumulative probability is the probability of getting at most k successes:
P(X ≤ k) = P(X = 0) + P(X = 1) + ... + P(X = k)
You calculate each individual P(X = i) for i = 0, 1, ..., k and add them up.
Example: A factory has a 10% defect rate. In a batch of 10 items, what's the probability of at most 2 defective items?
P(X ≤ 2) = C(10, 0)(0.1)^0(0.9)^10 + C(10, 1)(0.1)^1(0.9)^9 + C(10, 2)(0.1)^2(0.9)^8 ≈ 0.3487 + 0.3874 + 0.1937 ≈ 0.9298
So there's about a 93% chance of 2 or fewer defective items in the batch.
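The factory example, along with the mean and standard deviation of the same distribution, can be computed in a few lines (a sketch; `binom_pmf` is a helper defined here, not a library function):

```python
import math

def binom_pmf(n: int, k: int, p: float) -> float:
    """P(X = k) for a binomial with n trials and success probability p."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.1  # batch of 10 items, 10% defect rate

# Cumulative probability of at most 2 defects: sum P(X = i) for i = 0, 1, 2.
cumulative = sum(binom_pmf(n, i, p) for i in range(3))
print(round(cumulative, 4))  # 0.9298

# Mean and standard deviation: mu = np, sigma = sqrt(np(1 - p)).
print(n * p)                                  # 1.0
print(round(math.sqrt(n * p * (1 - p)), 4))   # 0.9487
```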