Monte Carlo methods are central to modern financial mathematics. From pricing exotic derivatives to managing portfolio risk, these techniques let you tackle problems that would be impossible to solve analytically. You're being tested on your ability to understand when to apply each method, why certain techniques reduce variance, and how sampling strategies connect to convergence rates and computational efficiency.
The core principles here (random sampling, variance reduction, Markov chain convergence, and sequential estimation) appear throughout quantitative finance. Don't just memorize algorithm names; know what problem each method solves and when you'd choose one over another. If an exam question describes a high-dimensional integral or a complex posterior distribution, you should immediately recognize which Monte Carlo approach fits best.
These techniques form the building blocks of Monte Carlo simulation. The fundamental idea is using random samples to approximate quantities that are difficult or impossible to compute directly.
Monte Carlo integration estimates integrals using random sampling. It's particularly powerful when analytical solutions don't exist or are computationally intractable.
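A minimal Python sketch of the idea, using only the standard library (the function name and the test integrand are illustrative choices, not part of any particular library):

```python
import math
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f over [a, b] as (b - a) times the
    average of f evaluated at n uniform random points."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

estimate = mc_integrate(math.exp, 0.0, 1.0)
# Converges toward e - 1 ≈ 1.71828 at the usual O(1/sqrt(n)) Monte Carlo rate
```

The same code works unchanged in high dimensions (replace the uniform scalar with a uniform vector), which is exactly where deterministic quadrature breaks down.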
Rejection sampling generates samples from a target distribution by comparing proposals against a known envelope distribution.
The acceptance rate equals 1/M, where M is the envelope constant satisfying f(x) ≤ M·g(x) for target density f and proposal density g, so a tight envelope is critical. If your proposal poorly approximates the target, most samples get rejected and the method becomes very slow. Rejection sampling is conceptually simple and useful as a starting point, but for complex distributions you'll typically need MCMC or importance sampling instead.
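A sketch for a Beta(2, 2) target under a Uniform(0, 1) envelope (the constant M = 1.5 is the density's maximum, so about 1/M = 2/3 of proposals are accepted; names are illustrative):

```python
import random

def rejection_sample(target_pdf, envelope_const, n, seed=0):
    """Draw n samples from target_pdf on [0, 1] using a Uniform(0, 1) envelope;
    envelope_const must satisfy target_pdf(x) <= envelope_const everywhere."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        x = rng.random()  # proposal from the uniform envelope
        if rng.random() * envelope_const <= target_pdf(x):
            samples.append(x)  # accept with probability target_pdf(x) / envelope_const
    return samples

def beta22(x):
    """Beta(2, 2) density; its maximum is 1.5, so M = 1.5 is a valid envelope."""
    return 6.0 * x * (1.0 - x)

draws = rejection_sample(beta22, 1.5, 10_000)
mean = sum(draws) / len(draws)  # Beta(2, 2) has mean 0.5
```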
Stratified sampling divides the sample space into distinct, non-overlapping strata and draws samples from each subgroup. This guarantees coverage across all regions rather than hoping random chance provides it.
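A minimal sketch with one draw per equal-width stratum on [0, 1] (an assumed setup for illustration); for a smooth integrand the error shrinks far faster than plain Monte Carlo with the same number of points:

```python
import math
import random

def stratified_estimate(f, n_strata, seed=0):
    """Integrate f over [0, 1] with one uniform draw per equal-width stratum."""
    rng = random.Random(seed)
    width = 1.0 / n_strata
    total = 0.0
    for i in range(n_strata):
        x = (i + rng.random()) * width  # uniform draw inside stratum i
        total += f(x)
    return total / n_strata

est = stratified_estimate(math.exp, 1_000)
# Targets e - 1 ≈ 1.71828; every stratum is guaranteed exactly one sample
```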
Compare: Monte Carlo Integration vs. Stratified Sampling: both estimate integrals through sampling, but stratified sampling imposes structure on where samples are drawn. Use stratified sampling when you know the integrand behaves differently across regions; use basic Monte Carlo when the function is relatively uniform or the structure is unknown.
Reducing variance means getting more accurate estimates with fewer samples. These methods exploit problem structure to make simulations converge faster without increasing computational cost proportionally.
Importance sampling concentrates samples in the regions that contribute most to the integral. Instead of sampling from the original distribution p(x), you sample from a biased proposal distribution q(x) and reweight each sample by the likelihood ratio p(x)/q(x).
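A sketch of the classic rare-event use case: estimating the normal tail probability P(Z > 4) by proposing from N(4, 1) and reweighting by the density ratio φ(x)/φ(x − 4), whose log simplifies to 4²/2 − 4x (function name and sample size are illustrative):

```python
import math
import random

def tail_prob_importance(threshold, n=50_000, seed=0):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) by sampling from the shifted
    proposal N(threshold, 1) and reweighting by phi(x) / phi(x - threshold),
    whose log is threshold^2 / 2 - threshold * x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)  # proposals land in the rare region
        if x > threshold:
            total += math.exp(0.5 * threshold ** 2 - threshold * x)
    return total / n

p = tail_prob_importance(4.0)
# True value 1 - Phi(4) ≈ 3.17e-5; plain sampling would waste almost every draw
```

Plain Monte Carlo would need millions of draws just to see a handful of exceedances; here every proposal lands near the tail and the weights correct the bias exactly.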
Control variates subtract a correlated quantity whose expectation is known, canceling shared noise; antithetic variates pair each draw with its mirror image so that opposite errors offset. These are two of the most commonly used variance reduction tools in practice, and both are essential where computational budgets are limited and precision requirements are high.
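As one concrete illustration of the antithetic idea, a minimal estimator of E[exp(U)] for U ~ Uniform(0, 1) (an assumed example: the pairing of u with 1 − u induces the negative correlation that cancels noise):

```python
import math
import random

def antithetic_estimate(f, n_pairs=50_000, seed=0):
    """Estimate E[f(U)] for U ~ Uniform(0, 1) by averaging f over the
    antithetic pairs (u, 1 - u); their negative correlation cancels noise."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / n_pairs

est = antithetic_estimate(math.exp)  # targets E[exp(U)] = e - 1
```

For a monotone f like exp, f(u) and f(1 − u) are strongly negatively correlated, so each pair is worth far more than two independent draws.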
Quasi-Monte Carlo replaces pseudorandom numbers with low-discrepancy sequences (such as Sobol or Halton sequences) that fill the sample space more uniformly than random points would.
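A self-contained sketch of a 2-D Halton sequence built from the van der Corput radical inverse (written by hand here to avoid assuming any QMC library), used to integrate a smooth function over the unit square:

```python
import math

def van_der_corput(i, base):
    """i-th term of the van der Corput sequence: reflect the base-b digits
    of i about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def halton_2d(n):
    """First n points of the 2-D Halton sequence (coprime bases 2 and 3)."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n + 1)]

pts = halton_2d(4_096)
est = sum(math.exp(x + y) for x, y in pts) / len(pts)
# Integrates exp(x + y) over the unit square; true value is (e - 1)^2 ≈ 2.9525
```

The points are fully deterministic, which is why reruns give identical answers; randomized variants (scrambling) restore error estimates while keeping the uniformity.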
Compare: Importance Sampling vs. Quasi-Monte Carlo: both improve convergence but through different mechanisms. Importance sampling changes what you sample (shifting the distribution toward high-impact regions); quasi-Monte Carlo changes how you generate sample points (replacing randomness with deterministic uniformity). For rare-event problems, importance sampling is the right tool. For smooth integrands in moderate dimensions, quasi-Monte Carlo often wins.
MCMC methods construct a Markov chain that eventually samples from your target distribution. The key insight is that you don't need to know the normalizing constant: only ratios of probabilities matter.
MCMC generates dependent samples from complex, high-dimensional distributions by constructing a Markov chain whose stationary distribution is the target.
This is the most general-purpose MCMC method. It works by proposing candidate moves and accepting or rejecting them probabilistically.
Notice that the target density π appears only through the ratio π(x')/π(x), so any normalizing constant cancels out. This is why Metropolis-Hastings works even when you only know the target up to a proportionality constant.
Acceptance rate tuning matters a lot. In high-dimensional problems, acceptance rates around 20-40% often indicate efficient exploration. Too high means your proposals are too timid (small steps); too low means proposals are too ambitious (large steps that keep getting rejected).
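A minimal random-walk Metropolis sketch targeting a standard normal that is deliberately specified only up to its normalizing constant (the step size 2.4 is roughly the classic one-dimensional tuning; all names are illustrative):

```python
import math
import random

def metropolis(log_target, n, step=1.0, x0=0.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept with
    probability min(1, target(x') / target(x)); log_target may be unnormalized."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)
        # Accept/reject on the log scale; only the density *ratio* matters
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        chain.append(x)
    return chain

# Standard normal target, known only up to its constant: log pi(x) = -x^2/2 + C
chain = metropolis(lambda x: -0.5 * x * x, 100_000, step=2.4)
mean = sum(chain) / len(chain)
```

Shrinking `step` pushes the acceptance rate toward 100% but makes the chain crawl; growing it collapses acceptance toward 0%, which is the tuning trade-off described above.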
Gibbs sampling updates each variable one at a time, drawing from its full conditional distribution while holding all other variables fixed.
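A sketch for the textbook bivariate-normal case, where both full conditionals are known normals (an assumed example to show the alternating-update pattern):

```python
import math
import random

def gibbs_bivariate_normal(rho, n, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each full conditional is normal: x | y ~ N(rho * y, 1 - rho^2), and
    symmetrically for y | x."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    samples = []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)  # update x given the current y
        y = rng.gauss(rho * x, sd)  # update y given the fresh x
        samples.append((x, y))
    return samples

draws = gibbs_bivariate_normal(0.8, 50_000)
corr_hat = sum(x * y for x, y in draws) / len(draws)  # estimates rho
```

There is no accept/reject step at all; every draw is used, which is why Gibbs is attractive whenever the conditionals are tractable.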
Random walk methods explore the sample space through incremental steps in random directions. Many MCMC implementations, including the basic Metropolis algorithm, are random walks at their core.
Compare: Metropolis-Hastings vs. Gibbs Sampling: both are MCMC methods, but Gibbs requires tractable conditional distributions while Metropolis-Hastings only needs unnormalized density ratios. Choose Gibbs when you can derive conditionals analytically (common in Bayesian conjugate models); use Metropolis-Hastings for more general problems where conditionals aren't available in closed form.
These techniques handle problems where distributions evolve over time or where you need to track changing states. The core challenge is maintaining accurate approximations as new information arrives.
Particle filters represent posterior distributions using a set of weighted samples (called "particles") that propagate through a state-space model over time.
The basic particle filter cycle at each time step:
1. Propagate each particle forward through the state dynamics.
2. Weight each particle by the likelihood of the new observation.
3. Resample particles in proportion to their weights, so unlikely particles are discarded and likely ones are duplicated.
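A bootstrap particle filter sketch for an assumed toy model (a latent Gaussian random walk observed with Gaussian noise; all names and parameter values are illustrative):

```python
import math
import random

def particle_filter(observations, n_particles=1_000, state_sd=0.5, obs_sd=0.5, seed=0):
    """Bootstrap particle filter for the toy state-space model
    x_t = x_{t-1} + N(0, state_sd^2),  y_t = x_t + N(0, obs_sd^2).
    Returns the filtered posterior mean of x_t after each observation."""
    rng = random.Random(seed)
    particles = [0.0] * n_particles
    means = []
    for y in observations:
        # 1. Propagate particles through the state dynamics
        particles = [p + rng.gauss(0.0, state_sd) for p in particles]
        # 2. Weight particles by the observation likelihood
        weights = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
        total = sum(weights)
        means.append(sum(w * p for w, p in zip(weights, particles)) / total)
        # 3. Resample in proportion to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

# Noisy observations of a latent state sitting near 2.0
obs_rng = random.Random(1)
obs = [2.0 + obs_rng.gauss(0.0, 0.5) for _ in range(50)]
filtered = particle_filter(obs)
```

Resampling at every step (as here) is the simplest scheme; production filters usually resample only when the effective sample size drops, to limit particle impoverishment.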
Sequential Monte Carlo (SMC) generalizes particle filtering to sample from sequences of distributions connected by importance sampling and resampling. It's the broader framework of which particle filters are one application.
Compare: Particle Filters vs. Sequential Monte Carlo: particle filters are a specific application of SMC to state-space models (tracking a latent state through time). SMC is the broader framework that also handles tempering between distributions, rare-event simulation, and model comparison. Think of particle filters as your go-to for tracking problems, and SMC as the general toolkit.
Monte Carlo ideas extend beyond integration to finding optimal solutions and designing efficient experiments. Randomization helps escape local optima and ensures comprehensive exploration of input spaces.
Simulated annealing is a global optimization method that uses controlled randomness to avoid getting trapped in local optima.
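A hedged sketch minimizing an illustrative double-well function, where a pure greedy search started in the right-hand well would stall at the local optimum:

```python
import math
import random

def simulated_annealing(f, x0=2.0, t0=2.0, cooling=0.999, n=20_000, seed=0):
    """Minimize f: always accept downhill moves, accept uphill moves with
    probability exp(-delta / T), and let the temperature T decay geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(n):
        candidate = x + rng.gauss(0.0, 0.5)
        fc = f(candidate)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling  # cool: uphill moves become ever less likely
    return best_x, best_f

def double_well(x):
    """Global minimum near x = -1.04; local minimum near x = +0.96."""
    return (x * x - 1.0) ** 2 + 0.3 * x

x_best, f_best = simulated_annealing(double_well)
```

Early on, the high temperature lets the chain hop the barrier between wells; as T decays, the accept rule hardens into greedy descent inside whichever well it has settled in.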
Latin hypercube sampling ensures each input variable spans its full range by dividing each dimension into equal intervals and placing exactly one sample point in each interval per dimension.
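A minimal sketch that builds the per-dimension interval structure directly (function and variable names are illustrative):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n Latin hypercube points in [0, 1)^dims: each axis is cut into n equal
    intervals and every interval is sampled exactly once, in shuffled order."""
    rng = random.Random(seed)
    points = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        cells = list(range(n))
        rng.shuffle(cells)  # random pairing of intervals across dimensions
        for i, cell in enumerate(cells):
            points[i][d] = (cell + rng.random()) / n
    return points

pts = latin_hypercube(10, 2)
# Projected onto either axis, the 10 values occupy 10 distinct tenths of [0, 1)
```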
Compare: Latin Hypercube Sampling vs. Stratified Sampling: both impose structure on sampling, but they partition differently. Latin hypercube ensures marginal coverage for each variable individually (each variable's range is fully represented). Stratified sampling partitions the joint space into cells. Use Latin hypercube for input uncertainty analysis when you care about each variable's effect; use stratified sampling when you understand the joint structure of the integrand.
The bootstrap estimates sampling distributions by resampling with replacement from observed data, requiring no parametric assumptions about the underlying distribution.
This gives you confidence intervals and standard errors without needing to assume normality or derive analytical formulas. It's widely used in backtesting trading strategies and validating risk models.
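A sketch of a bootstrap standard error for the mean of a small, made-up return series (the data and function name are illustrative assumptions):

```python
import random

def bootstrap_se(data, stat, n_boot=5_000, seed=0):
    """Standard error of stat(data), estimated by recomputing stat on
    n_boot resamples drawn with replacement from the observed data."""
    rng = random.Random(seed)
    n = len(data)
    stats = [stat(rng.choices(data, k=n)) for _ in range(n_boot)]
    center = sum(stats) / n_boot
    var = sum((s - center) ** 2 for s in stats) / (n_boot - 1)
    return var ** 0.5

returns = [0.8, -1.2, 0.3, 2.1, -0.5, 1.4, 0.2, -0.9, 1.1, 0.6]
se = bootstrap_se(returns, lambda xs: sum(xs) / len(xs))
# Comparable to the analytic standard error of the mean, s / sqrt(n)
```

The same function works unchanged for statistics with no analytic standard error, such as a Sharpe ratio or a drawdown quantile, which is the whole appeal.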
| Concept | Best Examples |
|---|---|
| Basic Integration | Monte Carlo Integration, Rejection Sampling |
| Variance Reduction | Importance Sampling, Control Variates, Antithetic Variates, Stratified Sampling |
| Deterministic Sequences | Quasi-Monte Carlo, Latin Hypercube Sampling |
| MCMC Sampling | Metropolis-Hastings, Gibbs Sampling, Random Walk Methods |
| Dynamic/Sequential | Particle Filters, Sequential Monte Carlo |
| Optimization | Simulated Annealing |
| Statistical Inference | Bootstrap Method |
| Rare Event Simulation | Importance Sampling, Sequential Monte Carlo |
Both importance sampling and stratified sampling reduce variance. What is the fundamental difference in how they achieve this, and when would you choose one over the other?
You need to sample from a posterior distribution where you can evaluate the unnormalized density but cannot compute the normalizing constant. Which two methods are designed specifically for this situation, and what distinguishes them?
Compare quasi-Monte Carlo methods with standard Monte Carlo integration: what convergence rate improvement do you gain, and in what situations does this advantage diminish?
A financial model requires tracking a latent state variable through time with non-Gaussian dynamics. Which method is most appropriate, and how does it differ from static MCMC approaches?
FRQ-style: Explain why the Metropolis-Hastings acceptance probability guarantees convergence to the target distribution π(x), and describe how the choice of proposal distribution affects computational efficiency.