Random variables and distributions provide the mathematical language for modeling uncertainty in causal inference. They let you formally describe treatment assignments, potential outcomes, and observed data, which are the building blocks for estimating causal effects. This topic covers the types of random variables, their probability distributions and properties, common named distributions, and how to transform random variables.
Types of random variables
A random variable assigns a numerical value to each outcome of a random process. In causal inference, random variables represent things like whether a subject received treatment, what outcome they experienced, or what covariates they have. The type of random variable you're working with determines which mathematical tools you'll use.
Discrete vs continuous
Discrete random variables take on a countable set of values, each with a positive probability. Think of counts: the number of patients who recover, the number of defective items in a batch.
Continuous random variables take on uncountably many values within some interval of real numbers. Think of measurements: blood pressure, income, time until an event.
This distinction matters because discrete variables use sums (and probability mass functions), while continuous variables use integrals (and probability density functions). Choosing the wrong framework leads to incorrect calculations.
Bernoulli random variables
Bernoulli random variables are the simplest type: they model a single binary outcome. The variable equals 1 ("success") with probability $p$ and 0 ("failure") with probability $1-p$.
In causal inference, Bernoulli variables show up constantly. Treatment assignment in a randomized experiment is often Bernoulli: each subject either gets the treatment (1) or doesn't (0).
- Expected value: $E[X] = p$
- Variance: $\mathrm{Var}(X) = p(1-p)$
Notice the variance is maximized when $p = 1/2$ and shrinks toward zero as $p$ approaches 0 or 1.
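As a quick sanity check, a simulation sketch (the value `p = 0.3` and the sample size are arbitrary choices for illustration) recovers both formulas:

```python
import random

random.seed(0)
p = 0.3
n = 100_000

# Draw n Bernoulli(p) samples: 1 with probability p, 0 otherwise.
draws = [1 if random.random() < p else 0 for _ in range(n)]

mean = sum(draws) / n                            # close to p = 0.3
var = sum((x - mean) ** 2 for x in draws) / n    # close to p*(1-p) = 0.21
```

With enough draws, the sample mean and variance settle near $p$ and $p(1-p)$.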
Binomial random variables
If you repeat a Bernoulli trial $n$ independent times and count the total successes, you get a binomial random variable. For example, if 10 patients each independently have a 0.3 probability of recovery, the total number who recover follows a Binomial(10, 0.3) distribution.
- PMF: $P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$ for $k = 0, 1, \dots, n$
- Expected value: $E[X] = np$
- Variance: $\mathrm{Var}(X) = np(1-p)$
The binomial is just the sum of $n$ independent Bernoulli($p$) variables, so its mean and variance follow directly from linearity of expectation and independence.
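A simulation sketch of the recovery example above (illustrative parameters) shows the sum-of-Bernoullis construction matching the formulas $np = 3.0$ and $np(1-p) = 2.1$:

```python
import random

random.seed(1)
n_trials, p = 10, 0.3
reps = 50_000

# Each replicate: sum of 10 independent Bernoulli(0.3) draws,
# which is exactly a Binomial(10, 0.3) draw.
counts = [sum(1 for _ in range(n_trials) if random.random() < p)
          for _ in range(reps)]

sim_mean = sum(counts) / reps                              # theory: n*p = 3.0
sim_var = sum((c - sim_mean) ** 2 for c in counts) / reps  # theory: n*p*(1-p) = 2.1
```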
Poisson random variables
Poisson random variables model the count of events occurring in a fixed interval of time or space, when events happen independently at a constant average rate. Examples: the number of hospital admissions per day, or the number of mutations in a stretch of DNA.
The single parameter $\lambda$ represents the average rate of events per interval.
- PMF: $P(X = k) = \frac{\lambda^k e^{-\lambda}}{k!}$ for $k = 0, 1, 2, \dots$
- Expected value: $E[X] = \lambda$
- Variance: $\mathrm{Var}(X) = \lambda$
A useful fact: the mean equals the variance. If you see count data where the variance is much larger than the mean, a Poisson model may not be appropriate (this is called overdispersion).
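The mean-equals-variance property is easy to check by simulation. The sketch below uses a standard Knuth-style sampler (`poisson_draw` is a hypothetical helper name, and $\lambda = 4$ is an arbitrary illustrative rate):

```python
import math
import random

random.seed(2)

def poisson_draw(lam: float) -> int:
    """Knuth's method: multiply uniforms until the product drops below e^{-lam}."""
    threshold = math.exp(-lam)
    k, prod = 0, random.random()
    while prod > threshold:
        k += 1
        prod *= random.random()
    return k

lam = 4.0
draws = [poisson_draw(lam) for _ in range(50_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
# Both should land near lam; a variance far above the mean in real count
# data would signal overdispersion.
```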
Gaussian random variables
The Gaussian (normal) distribution is the most widely used continuous distribution. It's symmetric and bell-shaped, fully characterized by its mean $\mu$ and standard deviation $\sigma$.
- PDF: $f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$
- Expected value: $E[X] = \mu$
- Variance: $\mathrm{Var}(X) = \sigma^2$
The central limit theorem explains why the Gaussian is so important: the sum (or average) of many independent random variables tends toward a Gaussian distribution regardless of the original distribution, as long as certain regularity conditions hold. This is why sample means are approximately normal in large samples, which underpins most of the hypothesis testing and confidence interval construction you'll encounter in causal inference.
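The CLT can be seen in a small simulation sketch (illustrative sizes): averages of 30 Uniform(0, 1) draws, which are far from normal individually, behave like a Gaussian with mean $0.5$ and variance $\frac{1/12}{30}$, so about 68% of the averages fall within one standard deviation of the mean.

```python
import random
import statistics

random.seed(3)

# Average of 30 Uniform(0,1) draws, repeated many times.
means = [statistics.fmean(random.random() for _ in range(30))
         for _ in range(20_000)]

# If the averages are approximately normal, roughly 68% of them land
# within one standard deviation of 0.5.
sd = (1 / 12 / 30) ** 0.5
within = sum(abs(m - 0.5) < sd for m in means) / len(means)
```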
Probability distributions
Probability distributions formalize how likely each possible value of a random variable is. In causal inference, they model uncertainty in treatment assignments, potential outcomes, and observed data. The three main ways to describe a distribution are through mass functions, density functions, and cumulative distribution functions.
Probability mass functions
A probability mass function (PMF) applies to discrete random variables. It gives the probability that the variable takes each specific value: $p_X(x) = P(X = x)$.
Two requirements for a valid PMF:
- $p_X(x) \ge 0$ for all $x$
- $\sum_x p_X(x) = 1$
You can read probabilities directly off a PMF. For instance, if $X$ is Binomial(3, 0.5), then $P(X = 2) = \binom{3}{2}(0.5)^2(0.5)^1 = 0.375$.
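The Binomial(3, 0.5) calculation, and the check that the PMF sums to 1, can be sketched directly from the formula:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

prob = binom_pmf(2, 3, 0.5)                          # 3 * 0.125 = 0.375
total = sum(binom_pmf(k, 3, 0.5) for k in range(4))  # a valid PMF sums to 1
```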
Probability density functions
A probability density function (PDF) applies to continuous random variables. Unlike a PMF, the PDF value at a single point is not a probability. Instead, you integrate the PDF over an interval to get the probability of falling in that interval: $P(a \le X \le b) = \int_a^b f_X(x)\,dx$.
Two requirements for a valid PDF:
- $f_X(x) \ge 0$ for all $x$
- $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$
A common mistake is interpreting $f_X(x)$ as a probability. It's a density, so it can actually exceed 1 at some points, as long as the total area under the curve equals 1.
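A concrete illustration: the uniform density on $[0, 0.5]$ equals 2 everywhere on the interval, yet it is a perfectly valid PDF because the area under it is still 1 (the midpoint Riemann sum below is just a numerical check):

```python
# Uniform density on [0, 0.5]: f(x) = 1 / (b - a) = 2 on the interval.
a, b = 0.0, 0.5

def uniform_pdf(x: float) -> float:
    return 1 / (b - a) if a <= x <= b else 0.0

density_at_quarter = uniform_pdf(0.25)  # 2.0 -- a density, not a probability

# Midpoint Riemann sum confirming the total area is 1.
steps = 100_000
width = (b - a) / steps
area = sum(uniform_pdf(a + (i + 0.5) * width) * width for i in range(steps))
```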
Cumulative distribution functions
The cumulative distribution function (CDF) works for both discrete and continuous variables. It gives the probability that the variable is less than or equal to a given value: $F_X(x) = P(X \le x)$.
Key properties:
- Non-decreasing: if $a \le b$, then $F_X(a) \le F_X(b)$
- $\lim_{x \to -\infty} F_X(x) = 0$ and $\lim_{x \to \infty} F_X(x) = 1$
- For continuous variables, the CDF is a smooth curve; for discrete variables, it's a step function
The CDF is especially handy for computing interval probabilities: $P(a < X \le b) = F_X(b) - F_X(a)$.
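For a distribution with a closed-form CDF this is a one-liner. A sketch using the exponential CDF $F(x) = 1 - e^{-\lambda x}$ (the rate $\lambda = 0.5$ and the interval are arbitrary illustrative choices):

```python
import math

# Exponential(rate = 0.5) CDF.
rate = 0.5

def exp_cdf(x: float) -> float:
    return 1 - math.exp(-rate * x) if x >= 0 else 0.0

# P(1 < X <= 3) as a difference of CDF values.
interval_prob = exp_cdf(3) - exp_cdf(1)  # e^{-0.5} - e^{-1.5} ≈ 0.383
```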

Joint distributions
Joint distributions describe the simultaneous behavior of two or more random variables. For discrete variables $X$ and $Y$, the joint PMF is $p_{X,Y}(x, y) = P(X = x, Y = y)$. For continuous variables, the joint PDF is $f_{X,Y}(x, y)$.
Joint distributions are essential in causal inference because you're almost always dealing with multiple variables at once (treatment, outcome, covariates). From a joint distribution, you can derive conditional and marginal distributions, which are the tools for reasoning about how variables relate to each other.
Conditional distributions
A conditional distribution describes the distribution of one variable given a known value of another. This is central to causal inference, where you often want to know the distribution of an outcome given a particular treatment.
- Discrete case: $p_{Y \mid X}(y \mid x) = \frac{p_{X,Y}(x, y)}{p_X(x)}$
- Continuous case: $f_{Y \mid X}(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)}$
The denominator must be nonzero (you can only condition on events that have positive probability or density). Conditional distributions are what connect observed associations to potential causal relationships, though moving from association to causation requires additional assumptions.
Marginal distributions
A marginal distribution gives the distribution of a single variable, ignoring (or "marginalizing out") the others.
- Discrete case: $p_X(x) = \sum_y p_{X,Y}(x, y)$
- Continuous case: $f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy$
You recover the marginal by summing or integrating the joint distribution over all values of the other variable. In causal inference, comparing marginal distributions of outcomes across treatment groups is one way to assess average treatment effects.
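Both operations are mechanical once the joint distribution is written down. A sketch with a hypothetical joint PMF over a binary treatment $T$ and binary recovery outcome $Y$ (the probabilities are made up for illustration):

```python
# Hypothetical joint PMF over treatment T and recovery Y, as {(t, y): prob}.
joint = {
    (0, 0): 0.30, (0, 1): 0.20,
    (1, 0): 0.15, (1, 1): 0.35,
}

# Marginal of T: sum the joint over y.
p_t1 = sum(p for (t, y), p in joint.items() if t == 1)  # 0.15 + 0.35 = 0.50

# Conditional: P(Y = 1 | T = 1) = joint / marginal.
p_y1_given_t1 = joint[(1, 1)] / p_t1                    # 0.35 / 0.50 = 0.70
```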
Properties of distributions
These summary quantities let you describe and compare distributions without specifying the full PMF or PDF. In causal inference, they're used to quantify treatment effects, measure precision of estimates, and assess relationships between variables.
Expected value
The expected value (mean) measures the center of a distribution. It's the long-run average you'd observe if you could repeat the random process infinitely many times.
- Discrete: $E[X] = \sum_x x\, p_X(x)$
- Continuous: $E[X] = \int_{-\infty}^{\infty} x\, f_X(x)\,dx$
The most important property of expected value is linearity: for any random variables $X$ and $Y$ and constants $a$ and $b$, $E[aX + bY] = aE[X] + bE[Y]$.
This holds whether or not $X$ and $Y$ are independent. Linearity is used constantly in deriving estimators for causal effects.
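A simulation sketch of the no-independence-needed point: take $Y = X^2$, which is strongly dependent on $X$, and check $E[2X + 3Y] = 2E[X] + 3E[Y]$ (for $X \sim \text{Uniform}(0,1)$ the theoretical value is $2 \cdot \frac{1}{2} + 3 \cdot \frac{1}{3} = 2$):

```python
import random
import statistics

random.seed(4)

# Y = X**2 is completely determined by X -- maximal dependence.
xs = [random.random() for _ in range(100_000)]
ys = [x ** 2 for x in xs]

lhs = statistics.fmean(2 * x + 3 * y for x, y in zip(xs, ys))
rhs = 2 * statistics.fmean(xs) + 3 * statistics.fmean(ys)
# lhs and rhs agree despite the dependence, and both are near 2.0.
```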
Variance and standard deviation
Variance measures how spread out a distribution is around its mean: $\mathrm{Var}(X) = E\!\left[(X - E[X])^2\right]$.
A useful computational shortcut: $\mathrm{Var}(X) = E[X^2] - (E[X])^2$.
The standard deviation is $\sigma = \sqrt{\mathrm{Var}(X)}$, which has the same units as $X$ and is often easier to interpret.
Unlike expectation, variance is not linear. For constants $a$ and $c$: $\mathrm{Var}(aX + c) = a^2\,\mathrm{Var}(X)$. For independent variables: $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$. If they're not independent, you need the covariance term: $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$.
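A simulation sketch showing why the covariance term matters (the dependence structure $Y = 0.5X + \varepsilon$ is an arbitrary illustrative choice):

```python
import random
import statistics

random.seed(5)

xs = [random.gauss(0, 1) for _ in range(50_000)]
# Y depends on X, so Var(X + Y) needs the covariance term.
ys = [0.5 * x + random.gauss(0, 1) for x in xs]

var_x = statistics.pvariance(xs)
var_y = statistics.pvariance(ys)

# Population covariance, computed directly from its definition.
mx, my = statistics.fmean(xs), statistics.fmean(ys)
cov = statistics.fmean((x - mx) * (y - my) for x, y in zip(xs, ys))

var_sum = statistics.pvariance([x + y for x, y in zip(xs, ys)])
# var_sum ≈ var_x + var_y + 2*cov; var_x + var_y alone undershoots.
```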
In causal inference, variance quantifies the precision of your treatment effect estimates and drives the width of confidence intervals.
Covariance and correlation
Covariance measures the linear association between two random variables: $\mathrm{Cov}(X, Y) = E\!\left[(X - E[X])(Y - E[Y])\right]$.
Positive covariance means the variables tend to move together; negative means they tend to move in opposite directions; zero means no linear relationship (but there could still be a nonlinear one).
Correlation standardizes covariance to the range $[-1, 1]$: $\rho_{X,Y} = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}$.
A correlation of $\pm 1$ means a perfect linear relationship. In causal inference, covariance and correlation help identify potential confounders and assess relationships between variables, though correlation alone never establishes causation.
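A simulation sketch of the standardization (the linear model $Y = 2X + \varepsilon$ is illustrative; its theoretical correlation is $2/\sqrt{5} \approx 0.894$):

```python
import random
import statistics

random.seed(6)

xs = [random.gauss(0, 1) for _ in range(50_000)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

mx, my = statistics.fmean(xs), statistics.fmean(ys)
cov = statistics.fmean((x - mx) * (y - my) for x, y in zip(xs, ys))

# Divide covariance by the two standard deviations to get correlation.
corr = cov / (statistics.pstdev(xs) * statistics.pstdev(ys))
# Theory: 2 / sqrt(1 * 5) ≈ 0.894
```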
Moment generating functions
A moment generating function (MGF) uniquely characterizes a distribution (when it exists). For a random variable $X$: $M_X(t) = E[e^{tX}]$.
The name comes from the fact that you can extract moments by differentiation:
- $E[X] = M_X'(0)$ (first derivative at $t = 0$)
- $E[X^2] = M_X''(0)$ (second derivative at $t = 0$)
MGFs are particularly useful for proving that sums of independent random variables follow specific distributions, since the MGF of a sum equals the product of the individual MGFs.
Characteristic functions
A characteristic function (CF) serves a similar role to the MGF but always exists: $\varphi_X(t) = E[e^{itX}]$,
where $i$ is the imaginary unit. The MGF may not exist for distributions with heavy tails (like the Cauchy distribution), but the CF always does. CFs are used in more advanced theoretical work, such as proving convergence results for sequences of random variables.

Common distributions
These named distributions appear repeatedly across statistics and causal inference. Each one models a particular type of data-generating process.
Uniform distribution
The uniform distribution assigns equal probability density to all values in an interval $[a, b]$.
- PDF: $f(x) = \frac{1}{b-a}$ for $a \le x \le b$, and 0 otherwise
- Expected value: $E[X] = \frac{a+b}{2}$
- Variance: $\mathrm{Var}(X) = \frac{(b-a)^2}{12}$
In causal inference, uniform distributions naturally model completely random treatment assignment, where every unit has the same probability of being assigned to each group.
Exponential distribution
The exponential distribution models waiting times between events in a Poisson process with rate $\lambda$.
- PDF: $f(x) = \lambda e^{-\lambda x}$ for $x \ge 0$
- Expected value: $E[X] = \frac{1}{\lambda}$
- Variance: $\mathrm{Var}(X) = \frac{1}{\lambda^2}$
The exponential distribution has the memoryless property: $P(X > s + t \mid X > s) = P(X > t)$. The probability of waiting another $t$ units doesn't depend on how long you've already waited. This is the only continuous distribution with this property.
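Memorylessness is easy to demonstrate by simulation. The sketch below samples Exponential(1) via inverse-CDF sampling and compares $P(X > 1.5 \mid X > 0.5)$ with $P(X > 1)$; both should be near $e^{-1} \approx 0.368$ (the thresholds are arbitrary illustrative choices):

```python
import math
import random

random.seed(7)

# Exponential(rate = 1) via inverse-CDF sampling: X = -ln(1 - U).
draws = [-math.log(1 - random.random()) for _ in range(200_000)]

# P(X > 1.5 | X > 0.5): restrict to draws that already survived past 0.5.
survived = [x for x in draws if x > 0.5]
cond = sum(x > 1.5 for x in survived) / len(survived)

# P(X > 1.0), unconditionally.
uncond = sum(x > 1.0 for x in draws) / len(draws)
# Both ≈ exp(-1) ≈ 0.368: the process "forgets" the first 0.5 units of waiting.
```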
Gamma distribution
The gamma distribution generalizes the exponential. While the exponential models the time until the first event, the gamma models the time until the $k$-th event in a Poisson process with rate $\lambda$.
- PDF: $f(x) = \frac{\lambda^k x^{k-1} e^{-\lambda x}}{\Gamma(k)}$ for $x > 0$, where $\Gamma(k)$ is the gamma function
- Expected value: $E[X] = \frac{k}{\lambda}$
- Variance: $\mathrm{Var}(X) = \frac{k}{\lambda^2}$
When $k = 1$, the gamma reduces to the exponential. The gamma is useful for modeling positive, right-skewed continuous variables.
Beta distribution
The beta distribution is defined on $[0, 1]$, making it natural for modeling probabilities or proportions.
- PDF: $f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha, \beta)}$ for $0 \le x \le 1$, where $B(\alpha, \beta)$ is the beta function
- Expected value: $E[X] = \frac{\alpha}{\alpha + \beta}$
- Variance: $\mathrm{Var}(X) = \frac{\alpha\beta}{(\alpha + \beta)^2 (\alpha + \beta + 1)}$
The beta is extremely flexible: depending on $\alpha$ and $\beta$, it can be uniform ($\alpha = \beta = 1$), U-shaped, skewed left, skewed right, or symmetric and peaked. It's also the conjugate prior for Bernoulli and binomial likelihoods, which makes it central to Bayesian approaches in causal inference.
Chi-squared distribution
The chi-squared distribution with $k$ degrees of freedom is the distribution of the sum of $k$ independent squared standard normal variables: if $Z_1, \dots, Z_k \sim N(0, 1)$ independently, then $\sum_{i=1}^{k} Z_i^2 \sim \chi^2_k$.
- Expected value: $E[X] = k$
- Variance: $\mathrm{Var}(X) = 2k$
Chi-squared distributions are used in hypothesis testing (goodness-of-fit tests, tests of independence) and in constructing confidence intervals for variances.
Student's t-distribution
The t-distribution with $\nu$ degrees of freedom arises when you estimate the mean of a normal population using the sample standard deviation instead of the true standard deviation.
- It's symmetric and bell-shaped like the normal, but has heavier tails, meaning extreme values are more likely
- As $\nu \to \infty$, the t-distribution converges to the standard normal
- At small sample sizes (say $n < 30$), the heavier tails matter and produce wider confidence intervals than you'd get using a normal distribution
The t-distribution is the workhorse for hypothesis tests and confidence intervals about means when the population variance is unknown, which is almost always the case in practice.
F-distribution
The F-distribution with $d_1$ and $d_2$ degrees of freedom is the distribution of the ratio of two independent chi-squared variables, each divided by its degrees of freedom.
- It's defined only for positive values and is right-skewed
- Used primarily in ANOVA (comparing means across multiple groups) and in F-tests for comparing variances
In causal inference, F-tests come up when testing whether treatment effects differ across multiple treatment arms simultaneously.
Transformations of random variables
Transformations create new random variables by applying functions to existing ones. In causal inference, you'll encounter transformations when computing propensity scores, inverse probability weights, log-transformed outcomes, and other derived quantities.
Linear transformations
A linear transformation takes the form $Y = aX + b$, where $a$ and $b$ are constants.
The effect on the distribution's summary statistics is straightforward: $E[Y] = aE[X] + b$ and $\mathrm{Var}(Y) = a^2\,\mathrm{Var}(X)$.
Notice that adding a constant $b$ shifts the mean but doesn't affect the variance. Multiplying by $a$ scales the mean (by $a$) and the standard deviation (by $|a|$), and scales the variance by $a^2$.
A common example: standardizing a variable by computing $Z = \frac{X - \mu}{\sigma}$. This is a linear transformation that produces a variable with mean 0 and variance 1.
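A sketch of standardization in action (the original mean 10 and standard deviation 3 are arbitrary illustrative values):

```python
import random
import statistics

random.seed(8)

xs = [random.gauss(10, 3) for _ in range(50_000)]
mu = statistics.fmean(xs)
sigma = statistics.pstdev(xs)

# Standardize: subtract the mean, divide by the standard deviation.
zs = [(x - mu) / sigma for x in xs]

z_mean = statistics.fmean(zs)     # 0 (up to floating-point rounding)
z_var = statistics.pvariance(zs)  # 1 (up to floating-point rounding)
```

Because the same $\mu$ and $\sigma$ used in the transformation were computed from the data, the resulting mean and variance are exactly 0 and 1, not just approximately.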
Functions of random variables
For a general (possibly nonlinear) transformation $Y = g(X)$, finding the distribution of $Y$ requires more work. Two standard approaches:
- CDF method: Find $F_Y(y) = P(g(X) \le y)$, then differentiate to get the PDF.
- Change of variables formula: If $g$ is monotonic and differentiable, the PDF of $Y$ is $f_Y(y) = f_X\!\left(g^{-1}(y)\right) \left|\frac{d}{dy} g^{-1}(y)\right|$.
The expected value of $g(X)$ can be computed without first finding the distribution of $Y$, using the law of the unconscious statistician (LOTUS):
- Discrete: $E[g(X)] = \sum_x g(x)\, p_X(x)$
- Continuous: $E[g(X)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\,dx$
LOTUS is a practical shortcut you'll use frequently when computing expected values of transformed variables.
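A worked sketch of discrete LOTUS with a fair six-sided die: $E[X^2]$ is computed by summing $x^2 p(x)$ over the PMF, with no need to find the distribution of $X^2$ itself (exact fractions are used to keep the arithmetic clean):

```python
from fractions import Fraction

# Fair six-sided die: p(x) = 1/6 for x in 1..6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# LOTUS with g(x) = x**2: E[g(X)] = sum of g(x) * p(x).
e_x = sum(x * p for x, p in pmf.items())              # 7/2
e_x_squared = sum(x**2 * p for x, p in pmf.items())   # 91/6

# Bonus: the variance shortcut Var(X) = E[X^2] - (E[X])^2.
variance = e_x_squared - e_x**2                       # 35/12
```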