🧩 Discrete Mathematics Unit 12 – Discrete Probability
Discrete probability lays the foundation for understanding uncertainty in various fields. It introduces key concepts like sample spaces, events, and probability axioms, providing tools to quantify and analyze random phenomena.
This unit covers essential topics such as counting techniques, conditional probability, and independence. It also explores discrete probability distributions, their properties, and applications in real-world problem-solving, preparing students for advanced statistical analysis and decision-making.
Probability measures the likelihood of an event occurring and is expressed as a value between 0 and 1
0 indicates an impossible event, while 1 represents a certain event
Sample space (S) consists of all possible outcomes of an experiment or random process
An event (E) is a subset of the sample space containing one or more outcomes
Events can be simple (single outcome) or compound (multiple outcomes)
Mutually exclusive events cannot occur simultaneously in a single trial
Collectively exhaustive events cover all possible outcomes in the sample space
Complementary events (E and E′) are mutually exclusive and collectively exhaustive, satisfying P(E)+P(E′)=1
Probability mass function (PMF) assigns probabilities to each possible outcome in a discrete probability distribution
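As a concrete illustration of these terms, here is a minimal Python sketch, assuming a fair six-sided die as the example: it builds a PMF over a sample space and computes probabilities of an event and its complement.

```python
from fractions import Fraction

# Fair six-sided die (an assumed running example, not from the text).
sample_space = {1, 2, 3, 4, 5, 6}
# The PMF assigns probability 1/6 to each outcome.
pmf = {outcome: Fraction(1, 6) for outcome in sample_space}

def prob(event):
    """P(E): sum the PMF over the outcomes in the event."""
    return sum(pmf[outcome] for outcome in event)

evens = {2, 4, 6}                   # a compound event
print(prob(evens))                  # 1/2
print(prob(sample_space - evens))   # complementary event: 1 - 1/2 = 1/2
```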
Sample Spaces and Events
Defining the sample space is crucial for calculating probabilities and analyzing events
Example: Rolling a fair six-sided die has a sample space S={1,2,3,4,5,6}
Events are described using set notation and can be represented using Venn diagrams
Example: The event of rolling an even number is E={2,4,6}
The union of two events (A∪B) contains all outcomes that belong to either event A, event B, or both
The intersection of two events (A∩B) contains outcomes that belong to both event A and event B
The complement of an event (E′) consists of all outcomes in the sample space that are not in event E
The empty set (∅) represents an impossible event with no outcomes
The probability of an event, P(E), is the sum of the probabilities of all outcomes in that event
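These set operations map directly onto Python's built-in set type; a short sketch continuing the die example (event names are illustrative):

```python
# Event algebra with Python sets on the die example.
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}          # rolling an even number
B = {1, 2, 3}          # rolling a number less than 4

print(A | B)           # union A ∪ B -> {1, 2, 3, 4, 6}
print(A & B)           # intersection A ∩ B -> {2}
print(S - A)           # complement A′ -> {1, 3, 5}
print(A & {1, 3, 5})   # mutually exclusive events intersect in ∅ -> set()
```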
Probability Axioms and Rules
Axiom 1: Non-negativity - The probability of any event E is non-negative, i.e., P(E)≥0
Axiom 2: Normalization - The probability of the entire sample space S is equal to 1, i.e., P(S)=1
Axiom 3: Additivity - For any two mutually exclusive events A and B, P(A∪B)=P(A)+P(B)
Multiplication rule: For any two events A and B, P(A∩B)=P(A)×P(B∣A), where P(B∣A) is the conditional probability of B given A
Addition rule: For any two events A and B, P(A∪B)=P(A)+P(B)−P(A∩B)
If A and B are mutually exclusive, P(A∩B)=0, simplifying the addition rule to P(A∪B)=P(A)+P(B)
Complement rule: For any event E, P(E′)=1−P(E), where E′ is the complement of E
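A quick sketch verifying the addition and complement rules on the (assumed) die example with equally likely outcomes:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}

def P(event):
    # Equally likely outcomes: P(E) = |E| / |S|
    return Fraction(len(event), len(S))

A, B = {2, 4, 6}, {1, 2, 3}
assert P(A | B) == P(A) + P(B) - P(A & B)   # addition rule
assert P(S - A) == 1 - P(A)                 # complement rule
assert P(set()) == 0 and P(S) == 1          # axioms at the extremes
print(P(A | B))                             # 5/6
```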
Counting Techniques for Probability
Fundamental counting principle (multiplication rule) states that if an experiment consists of n independent stages with m₁, m₂, ..., mₙ possible outcomes respectively, the total number of possible outcomes is m₁ × m₂ × ... × mₙ
Example: The number of possible outcomes when rolling two fair dice is 6×6=36
Permutations count the number of ways to arrange n distinct objects in a specific order
The formula for permutations is P(n,r) = n!/(n−r)!, where n is the total number of objects and r is the number of objects being arranged
Combinations count the number of ways to select r objects from a set of n distinct objects, disregarding the order
The formula for combinations is C(n,r) = n!/(r!(n−r)!), often written as the binomial coefficient "n choose r"
Permutations with repetition count the number of ways to arrange n objects, allowing repetition, in r positions
The formula for permutations with repetition is n^r
Combinations with repetition count the number of ways to select r objects from a set of n distinct objects, allowing repetition and disregarding the order
The formula for combinations with repetition is C(n+r−1, r)
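Python's standard library exposes these counts directly via math.perm and math.comb; a short sketch with assumed values n = 6, r = 3:

```python
from math import comb, perm  # available in Python 3.8+

n, r = 6, 3
print(perm(n, r))           # P(6,3) = 6!/(6-3)! = 120 ordered arrangements
print(comb(n, r))           # C(6,3) = 6!/(3!·3!) = 20 unordered selections
print(n ** r)               # 6^3 = 216 arrangements with repetition
print(comb(n + r - 1, r))   # C(8,3) = 56 selections with repetition
```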
Conditional Probability
Conditional probability P(A∣B) measures the probability of event A occurring given that event B has already occurred
The formula for conditional probability is P(A∣B) = P(A∩B)/P(B), where P(B) ≠ 0
Bayes' theorem relates conditional probabilities and can be used to update probabilities based on new information
Bayes' theorem states that P(A∣B) = P(B∣A) × P(A)/P(B), where P(B) ≠ 0
The law of total probability states that for a partition of the sample space {B₁, B₂, ..., Bₙ}, P(A) = P(A∣B₁)P(B₁) + P(A∣B₂)P(B₂) + ... + P(A∣Bₙ)P(Bₙ)
This law is useful when the probability of an event A is unknown, but conditional probabilities given a partition of the sample space are known
The multiplication rule for conditional probability states that P(A∩B)=P(A∣B)×P(B)=P(B∣A)×P(A)
Conditional probability is used in various applications, such as medical diagnosis, machine learning, and decision-making under uncertainty
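A sketch of Bayes' theorem and the law of total probability for a diagnostic test; the prevalence, sensitivity, and false-positive rate below are made-up illustrative numbers, not from the text:

```python
# Assumed numbers: 1% prevalence, 95% sensitivity, 90% specificity.
p_disease = 0.01
p_pos_given_disease = 0.95   # sensitivity, P(+ | D)
p_pos_given_healthy = 0.10   # false-positive rate, P(+ | D′)

# Law of total probability: P(+) = P(+|D)P(D) + P(+|D′)P(D′)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(D | +) = P(+ | D) × P(D) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ≈ 0.088
```

Even with a fairly accurate test, the low prior P(D) keeps the posterior small, which is why updating on the prevalence matters.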
Independence and Dependence
Two events A and B are independent if the occurrence of one event does not affect the probability of the other event
Mathematically, A and B are independent if P(A∩B)=P(A)×P(B) or equivalently, P(A∣B)=P(A) and P(B∣A)=P(B)
If events A and B are independent, then their complements A′ and B′ are also independent
Pairwise independence does not imply mutual independence for three or more events
Example: Flip two fair coins. Let A be the event that the first coin lands heads, B the event that the second coin lands heads, and C the event that both coins show the same face. A, B, and C are pairwise independent but not mutually independent, since P(A∩B∩C) = 1/4 while P(A)×P(B)×P(C) = 1/8
Conditional independence: Events A and B are conditionally independent given event C if P(A∩B∣C)=P(A∣C)×P(B∣C)
Independence is a crucial concept in probability theory and is used in various applications, such as random variable generation, hypothesis testing, and Bayesian networks
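A short sketch verifying the two-coin example above: the three events pass every pairwise product test, yet the product rule fails for all three together.

```python
from fractions import Fraction
from itertools import product

# Sample space of two fair coin flips; outcomes are equally likely.
S = set(product("HT", repeat=2))

def P(event):
    return Fraction(len(event), len(S))

A = {s for s in S if s[0] == "H"}   # first coin heads
B = {s for s in S if s[1] == "H"}   # second coin heads
C = {s for s in S if s[0] == s[1]}  # both coins match

# Pairwise independent: P(X ∩ Y) = P(X) × P(Y) for each pair
assert P(A & B) == P(A) * P(B)
assert P(A & C) == P(A) * P(C)
assert P(B & C) == P(B) * P(C)
# ...but not mutually independent: P(A ∩ B ∩ C) ≠ P(A)P(B)P(C)
print(P(A & B & C), P(A) * P(B) * P(C))  # 1/4 vs 1/8
```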
Discrete Probability Distributions
A discrete probability distribution assigns probabilities to each possible outcome in a discrete sample space
The probability mass function (PMF) p(x) gives the probability of a discrete random variable X taking on a specific value x
Properties of a PMF: 0 ≤ p(x) ≤ 1 for all x, and ∑ₓ p(x) = 1
The cumulative distribution function (CDF) F(x) gives the probability that a random variable X takes a value less than or equal to x
F(x) = P(X ≤ x) = ∑ p(t), summed over all t ≤ x
Common discrete probability distributions include:
Bernoulli distribution: Models a single trial with two possible outcomes (success or failure)
Binomial distribution: Models the number of successes in a fixed number of independent Bernoulli trials
Geometric distribution: Models the number of trials needed to achieve the first success in a series of independent Bernoulli trials
Poisson distribution: Models the number of events occurring in a fixed interval of time or space, given a known average rate of occurrence
The expected value (mean) of a discrete random variable X is E(X) = ∑ₓ x × p(x)
The variance of a discrete random variable X is Var(X) = E(X²) − [E(X)]²
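A sketch that builds a binomial PMF from the combinations formula and computes its CDF, expected value, and variance; the parameters n = 4, p = 0.5 are assumed for illustration:

```python
from math import comb

# Binomial(n=4, p=0.5): P(X=k) = C(n,k) p^k (1-p)^(n-k)
n, p = 4, 0.5
pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
assert abs(sum(pmf.values()) - 1) < 1e-12        # PMF sums to 1

def cdf(x):
    """F(x) = P(X <= x), the running sum of the PMF."""
    return sum(pmf[k] for k in pmf if k <= x)

mean = sum(k * pmf[k] for k in pmf)              # E(X) = 2.0 (equals n*p)
var = sum(k**2 * pmf[k] for k in pmf) - mean**2  # Var(X) = 1.0 (equals n*p*(1-p))
print(mean, var, cdf(2))                         # 2.0 1.0 0.6875
```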
Applications and Problem Solving
Probability theory has numerous real-world applications in fields such as medical diagnosis, machine learning, and decision-making under uncertainty
When solving probability problems, it is essential to:
Clearly define the sample space and events of interest
Identify the appropriate probability rules, axioms, or distributions to apply
Use counting techniques (permutations, combinations) when necessary
Employ conditional probability and Bayes' theorem when dealing with dependent events or updating probabilities based on new information
Example: In a group of 10 people, 4 have brown eyes, and 6 have blue eyes. If 3 people are randomly selected from the group, what is the probability that exactly 2 of them have blue eyes?
Solution: Use the hypergeometric distribution, which models the number of successes in a fixed number of draws from a population without replacement
The probability is P(X=2) = [C(6,2) × C(4,1)] / C(10,3) = (15 × 4)/120 = 0.5
Simulation techniques, such as Monte Carlo methods, can be used to estimate probabilities in complex scenarios or when analytical solutions are difficult to obtain
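A minimal Monte Carlo sketch estimating the eye-color probability from the worked example above; sampling without replacement mirrors the hypergeometric setup (the seed is chosen arbitrarily for reproducibility):

```python
import random

# Group of 10 people: 6 blue-eyed, 4 brown-eyed; draw 3 without replacement.
random.seed(0)
group = ["blue"] * 6 + ["brown"] * 4
trials = 100_000
hits = sum(random.sample(group, 3).count("blue") == 2 for _ in range(trials))
print(hits / trials)  # ≈ 0.5, matching the exact hypergeometric answer
```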
Probability theory provides a foundation for statistical inference, hypothesis testing, and decision-making under uncertainty