Fiveable

📊Actuarial Mathematics Unit 2 Review


2.6 Simulation methods and Monte Carlo techniques


Written by the Fiveable Content Team • Last updated August 2025

Monte Carlo simulation uses random sampling to model complex systems and estimate outcomes that can't be solved analytically. For actuaries, this is essential: pricing derivatives, assessing risk measures, modeling insurance claims, and evaluating reinsurance contracts all depend on generating thousands (or millions) of scenarios to understand the distribution of possible outcomes.

This section covers the core simulation techniques, variance reduction methods, stochastic process simulation, derivative pricing, risk management applications, and actuarial-specific uses of Monte Carlo methods.

Monte Carlo simulation basics

The core idea behind Monte Carlo simulation is straightforward: build a mathematical model of a system, feed it random inputs drawn from appropriate probability distributions, compute the outputs, and repeat many times. The distribution of those outputs gives you estimates of quantities of interest (expected values, percentiles, probabilities).

Monte Carlo methods are especially valuable when closed-form solutions don't exist, which is common in actuarial work involving complex payoff structures, path dependencies, or correlated risks.

Generating random numbers

Everything in Monte Carlo simulation starts with random number generation. In practice, computers use pseudorandom number generators (PRNGs), which are deterministic algorithms that produce sequences appearing random.

Common PRNGs include:

  • Linear congruential generators (LCGs): Simple and fast, but can have short periods and correlation issues in high dimensions.
  • Mersenne Twister: The standard in most statistical software. It has an extremely long period ($2^{19937} - 1$) and good uniformity properties.
  • Sobol sequences: These are actually quasi-random (low-discrepancy) sequences rather than pseudorandom. They fill the sample space more evenly, which can improve convergence rates.

A good PRNG should have a long period, produce uniformly distributed output, and exhibit low correlation between successive numbers. Once you have uniform random numbers on $[0, 1]$, you transform them into draws from whatever distribution you need (normal, exponential, Poisson, etc.).

Inverse transform method

The inverse transform method converts uniform random numbers into random variables from a target distribution using the cumulative distribution function (CDF).

Steps:

  1. Generate a uniform random number $U \sim \text{Uniform}(0, 1)$.
  2. Compute $X = F_X^{-1}(U)$, where $F_X^{-1}$ is the inverse (quantile function) of the target CDF.
  3. The resulting $X$ follows the desired distribution.

This works because if $X$ has CDF $F_X$, then $F_X(X) \sim \text{Uniform}(0, 1)$, so inverting the process gives you the correct distribution.

Example: To generate an exponential random variable with rate $\lambda$:

  • The CDF is $F(x) = 1 - e^{-\lambda x}$
  • Setting $U = 1 - e^{-\lambda x}$ and solving: $X = -\frac{1}{\lambda}\ln(1 - U)$
  • Since $1 - U$ is also uniform on $(0, 1)$, you can simplify to $X = -\frac{1}{\lambda}\ln(U)$

The method works for any distribution with an invertible CDF (exponential, Pareto, triangular, Weibull, etc.). It fails when the inverse CDF has no closed form, which is where the next method comes in.
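The exponential example above takes only a few lines in Python with NumPy (the rate value here is illustrative):

```python
import numpy as np

def sample_exponential(lam, n, rng):
    """Inverse transform: X = -ln(U)/lam maps Uniform(0,1) draws to Exponential(rate=lam)."""
    u = rng.uniform(size=n)
    return -np.log(u) / lam   # uses the simplified form with U in place of 1 - U

rng = np.random.default_rng(42)
x = sample_exponential(lam=2.0, n=100_000, rng=rng)
print(x.mean())  # sample mean should be close to the true mean 1/lam = 0.5
```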

Acceptance-rejection method

When the inverse CDF is unavailable or expensive to compute, the acceptance-rejection method generates samples from a target density $f(x)$ using a simpler proposal distribution $g(x)$.

Steps:

  1. Choose a proposal distribution $g(x)$ that you can easily sample from, and find a constant $M$ such that $f(x) \leq M \cdot g(x)$ for all $x$.
  2. Generate a candidate $Y$ from $g(x)$.
  3. Generate $U \sim \text{Uniform}(0, 1)$.
  4. If $U \leq \frac{f(Y)}{M \cdot g(Y)}$, accept $Y$ as a draw from $f(x)$. Otherwise, reject it and return to step 2.
  5. Repeat until you have enough accepted samples.

The efficiency of this method depends on how tight the bound $M$ is. A smaller $M$ means a higher acceptance rate; when $f$ and $g$ are both densities, the overall acceptance probability is exactly $1/M$. If $M$ is large, you'll reject most candidates and waste computation. The key is choosing $g(x)$ to closely match the shape of $f(x)$.
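As an illustration (our own choice of target, not from the text), here is a minimal sampler for the Beta(2,2) density $f(x) = 6x(1-x)$ using a Uniform(0,1) proposal, so $g(x) = 1$ and $M = 1.5$ (the density's maximum):

```python
import numpy as np

def beta22_pdf(x):
    return 6.0 * x * (1.0 - x)   # Beta(2,2) density on [0,1]; peaks at 1.5

def accept_reject(n, rng, M=1.5):
    """Acceptance-rejection for Beta(2,2) with a Uniform(0,1) proposal (g = 1)."""
    out = []
    while len(out) < n:
        y = rng.uniform()                      # candidate from g
        u = rng.uniform()
        if u <= beta22_pdf(y) / M:             # accept with probability f(y)/(M*g(y))
            out.append(y)
    return np.array(out)

rng = np.random.default_rng(0)
x = accept_reject(20_000, rng)
print(x.mean())  # true mean of Beta(2,2) is 0.5
```

With $M = 1.5$, about two thirds of candidates are accepted, matching the $1/M$ rule of thumb.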

Variance reduction techniques

A naive Monte Carlo estimate converges at rate $O(1/\sqrt{n})$, meaning you need 100 times more samples to get one extra decimal digit of precision. Variance reduction techniques improve accuracy without increasing the sample size by exploiting structure in the problem.

These techniques are critical in actuarial applications where high precision matters for pricing and reserving.

Antithetic variates

Antithetic variates reduce variance by introducing negative correlation between pairs of simulated outcomes.

For each uniform draw $U$, also use its complement $1 - U$. If the original simulation produces output $Y_1$ from $U$ and output $Y_2$ from $1 - U$, the estimator is:

$\hat{\theta} = \frac{Y_1 + Y_2}{2}$

Because $Y_1$ and $Y_2$ are negatively correlated, their average has lower variance than the average of two independent samples. This works well when the output is a monotone function of the input random variables.

Example: When pricing a European call option under GBM, generate a standard normal $Z$ for one path and use $-Z$ for the antithetic path. If one path produces a high stock price (large payoff), the other tends to produce a low stock price, and the pair's average is more stable.
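A sketch of the call-option example in Python with NumPy (Black-Scholes parameter values are illustrative):

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_pairs = 50_000
rng = np.random.default_rng(7)

z = rng.standard_normal(n_pairs)

def disc_payoff(z):
    """Discounted call payoff from a terminal GBM draw under the risk-neutral measure."""
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

y1, y2 = disc_payoff(z), disc_payoff(-z)   # antithetic partner uses -z
pair_avg = 0.5 * (y1 + y2)

price = pair_avg.mean()
print(price)  # Black-Scholes value for these inputs is about 10.45
```

Comparing `pair_avg.var()` with `y1.var()` shows the variance reduction directly.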

Control variates

Control variates exploit the correlation between your quantity of interest and a related variable whose expected value you already know.

Suppose you want to estimate $E[Y]$ and you have a control variate $C$ with known mean $E[C]$. The adjusted estimator is:

$\hat{\theta}_{CV} = \bar{Y} - \beta(\bar{C} - E[C])$

The optimal coefficient $\beta^*$ equals $\text{Cov}(Y, C)/\text{Var}(C)$, which minimizes the variance of the adjusted estimator. In practice, you estimate $\beta^*$ from the simulation data.

Example: When pricing an Asian option (payoff based on arithmetic average price), use the geometric average price as a control variate. The geometric Asian option has a closed-form solution, and the two averages are highly correlated, so this can dramatically reduce variance.
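The geometric-average control variate requires the closed-form geometric Asian price, so here is a deliberately simpler toy (our own illustration, not the Asian option itself): estimating $E[e^U] = e - 1$ with the control $C = U$, whose mean $E[C] = 0.5$ is known:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.uniform(size=n)

y = np.exp(u)   # target: E[e^U] = e - 1
c = u           # control variate with known mean E[C] = 0.5

beta = np.cov(y, c)[0, 1] / np.var(c)       # estimated optimal coefficient beta*
est_cv = y.mean() - beta * (c.mean() - 0.5)  # adjusted estimator
print(est_cv)
```

The residuals $Y - \beta(C - E[C])$ have far lower variance than $Y$ alone, which is the whole point.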

Importance sampling

Importance sampling shifts the sampling distribution to focus on regions that contribute most to the quantity being estimated. This is especially useful for estimating rare event probabilities.

Instead of sampling from the original distribution $f(x)$, sample from an alternative distribution $g(x)$ and reweight:

$E_f[h(X)] = E_g\left[h(X) \frac{f(X)}{g(X)}\right]$

The ratio $f(X)/g(X)$ is called the likelihood ratio or Radon-Nikodym derivative. The optimal importance sampling distribution is proportional to $|h(x)| f(x)$, though this is rarely available in closed form.

Example: Estimating the probability of aggregate losses exceeding a very high threshold in an insurance portfolio. Under the original distribution, very few simulated scenarios produce such extreme losses. By sampling from a heavier-tailed distribution, you generate more extreme scenarios and correct for the bias through the likelihood ratio, yielding a much more precise estimate.
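A minimal sketch for a rare-event probability (a toy stand-in for the insurance example): estimating $P(Z > 4)$ for a standard normal by sampling from $N(4, 1)$ instead. For normals, the likelihood ratio works out to $f(x)/g(x) = e^{\mu^2/2 - \mu x}$ with $\mu = 4$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
mu_shift = 4.0   # sample from N(4,1) instead of N(0,1) to hit the rare region

x = rng.normal(mu_shift, 1.0, size=n)
weights = np.exp(0.5 * mu_shift**2 - mu_shift * x)  # likelihood ratio f(x)/g(x)
est = np.mean((x > 4.0) * weights)
print(est)  # true value: P(Z > 4) is about 3.17e-5
```

A naive estimate with the same budget would see only a handful of exceedances; the shifted sampler sees about half the draws land in the rare region.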

Stratified sampling

Stratified sampling partitions the input space into disjoint strata and samples independently within each stratum. This guarantees that every region of the input space is represented, reducing variance compared to purely random sampling.

Steps:

  1. Divide the input space into $K$ non-overlapping strata.
  2. Allocate samples to each stratum (proportional to stratum size, or optimally proportional to stratum size times stratum standard deviation).
  3. Sample independently within each stratum.
  4. Combine the stratum estimates using a weighted average.

Example: When simulating a multi-sector portfolio, stratify by industry sector. This ensures each sector's risk characteristics are captured in every simulation run, rather than relying on random chance to produce a representative mix.
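A sketch with equal-width, equal-allocation strata for the toy integral $\int_0^1 e^x\,dx = e - 1$ (our own example, simpler than the portfolio case):

```python
import numpy as np

rng = np.random.default_rng(5)
K, per_stratum = 100, 100        # 100 equal strata, 100 draws in each

edges = np.linspace(0.0, 1.0, K + 1)
u = rng.uniform(size=(K, per_stratum))
x = edges[:-1, None] + u / K     # scale each draw into its stratum [k/K, (k+1)/K)

# Equal strata + equal allocation means the weighted average is a simple mean.
est = np.exp(x).mean()
print(est)  # true value: e - 1, about 1.71828
```

Every sub-interval of $[0,1]$ is guaranteed to be sampled, so the estimator's variance is far below that of 10,000 purely random draws.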

Simulation of stochastic processes

Simulating stochastic processes means generating discrete-time approximations of continuous-time random paths. These simulated paths are the raw material for Monte Carlo pricing, risk measurement, and actuarial modeling.


Brownian motion simulation

Brownian motion (Wiener process) $W(t)$ is the building block for most continuous-time financial models. Its key properties: it starts at zero, has independent increments, and each increment $W(t) - W(s)$ is normally distributed with mean 0 and variance $t - s$.

To simulate at discrete times $0 = t_0 < t_1 < \cdots < t_n$:

  1. Generate independent standard normals $Z_1, Z_2, \ldots, Z_n$.

  2. Compute each increment: $\Delta W_i = \sqrt{t_i - t_{i-1}} \cdot Z_i$.

  3. Accumulate: $W(t_i) = W(t_{i-1}) + \Delta W_i$.

For equal time steps $\Delta t$, each increment is simply $\sqrt{\Delta t} \cdot Z_i$. Brownian motion underpins the Black-Scholes model and virtually all diffusion-based financial models.

Geometric Brownian motion

Geometric Brownian motion (GBM) models asset prices that are always positive and have log-normal returns. The SDE is:

$dS(t) = \mu S(t) \, dt + \sigma S(t) \, dW(t)$

The exact solution is:

$S(t) = S(0) \exp\left(\left(\mu - \frac{\sigma^2}{2}\right)t + \sigma W(t)\right)$

The $-\sigma^2/2$ term is the Itô correction, which ensures $E[S(t)] = S(0)e^{\mu t}$.

To simulate GBM at discrete times:

  1. Simulate the Brownian motion path $W(t_1), W(t_2), \ldots, W(t_n)$.
  2. At each time point, compute $S(t_i) = S(0) \exp\left(\left(\mu - \frac{\sigma^2}{2}\right)t_i + \sigma W(t_i)\right)$.

Equivalently, you can step forward iteratively:

$S(t_{i+1}) = S(t_i) \exp\left(\left(\mu - \frac{\sigma^2}{2}\right)\Delta t + \sigma \sqrt{\Delta t} \, Z_{i+1}\right)$

Example: Stock with $S(0) = 100$, $\mu = 0.10$, $\sigma = 0.20$, simulated monthly ($\Delta t = 1/12$) over one year. Each step: $S(t_{i+1}) = S(t_i) \exp(0.00667 + 0.05774 \, Z_{i+1})$, since $(\mu - \sigma^2/2)\Delta t = 0.08/12 \approx 0.00667$ and $\sigma\sqrt{\Delta t} = 0.2/\sqrt{12} \approx 0.05774$.
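The monthly example translates directly into a vectorized NumPy sketch (drift and volatility are the values above; the path count is our choice):

```python
import numpy as np

S0, mu, sigma = 100.0, 0.10, 0.20
dt, n_steps, n_paths = 1.0 / 12.0, 12, 10_000
rng = np.random.default_rng(11)

z = rng.standard_normal((n_paths, n_steps))
# Exact GBM stepping: cumulative sum of log-increments, then exponentiate.
log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_increments, axis=1))

print(S[:, -1].mean())  # E[S(1)] = 100 * e^{0.10}, about 110.52
```

Because the log-increments use the exact lognormal transition, there is no discretization bias, only Monte Carlo noise.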

Jump processes simulation

Jump processes capture sudden discontinuities that diffusion models miss: defaults, catastrophes, market crashes.

Poisson process with rate $\lambda$:

  1. Generate inter-arrival times $\tau_i \sim \text{Exponential}$ with mean $1/\lambda$ (i.e., rate $\lambda$).
  2. Arrival times are $T_k = \sum_{i=1}^{k} \tau_i$.
  3. The count process $N(t)$ equals the number of arrivals up to time $t$.

Compound Poisson process adds random jump sizes:

  1. Simulate the Poisson process to get jump times $T_1, T_2, \ldots$
  2. At each jump time, draw a jump size $J_k$ from a specified distribution (e.g., lognormal, exponential).
  3. The process value is $X(t) = \sum_{k: T_k \leq t} J_k$.

Compound Poisson processes are widely used in actuarial science to model aggregate claims, where $N(t)$ is the claim count and $J_k$ is the individual claim severity.
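The steps above can be sketched as a function returning one path's count and value at time $T$ (the rate, horizon, and lognormal jump-size parameters are illustrative):

```python
import numpy as np

def compound_poisson_at_T(lam, T, rng):
    """One scenario: Poisson jump times on [0, T] plus lognormal jump sizes (illustrative)."""
    # Draw enough inter-arrival times (mean 1/lam each) to comfortably cover [0, T].
    taus = rng.exponential(1.0 / lam, size=int(5 * lam * T) + 10)
    jump_times = np.cumsum(taus)
    jump_times = jump_times[jump_times <= T]
    jump_sizes = rng.lognormal(0.0, 0.5, size=jump_times.size)
    return jump_times.size, jump_sizes.sum()   # (N(T), X(T))

rng = np.random.default_rng(13)
counts = [compound_poisson_at_T(3.0, 2.0, rng)[0] for _ in range(20_000)]
print(np.mean(counts))  # should be close to lam * T = 6
```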

Stochastic volatility models

GBM assumes constant volatility, which contradicts observed market behavior (volatility smiles, clustering). Stochastic volatility models let volatility itself follow a random process.

The Heston model is the most common:

$dS(t) = \mu S(t) \, dt + \sqrt{v(t)} \, S(t) \, dW_1(t)$

$dv(t) = \kappa(\theta - v(t)) \, dt + \xi \sqrt{v(t)} \, dW_2(t)$

where $\kappa$ is the mean-reversion speed, $\theta$ is the long-run variance, $\xi$ is the vol-of-vol, and $W_1, W_2$ are Brownian motions with correlation $\rho$.

Simulation approach:

  1. Generate two correlated standard normals at each time step: $Z_1$ and $Z_2 = \rho Z_1 + \sqrt{1 - \rho^2} \, Z_3$, where $Z_1, Z_3$ are independent.

  2. Update variance using Euler discretization (with a floor at zero or using the full truncation scheme to prevent negative variance).

  3. Update the asset price using the current variance.

The Feller condition $2\kappa\theta > \xi^2$ ensures the variance process stays positive in continuous time, but discretization can still produce negative values, so truncation or reflection schemes are needed in practice.
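A full-truncation Euler sketch of the scheme above (all parameter values are illustrative assumptions):

```python
import numpy as np

S0, v0, mu = 100.0, 0.04, 0.05
kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7
T, n_steps, n_paths = 1.0, 252, 5_000
dt = T / n_steps
rng = np.random.default_rng(17)

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z3 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * z3   # correlated driver for the variance
    v_pos = np.maximum(v, 0.0)                   # full truncation: use v+ wherever v appears
    S = S * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v = v + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

print(S.mean())  # should be near S0 * e^{mu*T}, about 105.13
```

The variance variable `v` is allowed to go negative between steps; only its positive part feeds the drift and diffusion, which is what "full truncation" means.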

Pricing financial derivatives

Monte Carlo simulation is one of the primary tools for pricing derivatives, particularly when the payoff structure is complex or the underlying dynamics don't admit closed-form solutions.

European options pricing

European options can only be exercised at expiration. Under risk-neutral pricing, the option value equals the discounted expected payoff.

Steps to price a European call with strike $K$ and expiration $T$:

  1. Simulate $N$ paths of the underlying asset price $S(T)$ under the risk-neutral measure (replace $\mu$ with the risk-free rate $r$ in GBM).

  2. For each path $i$, compute the payoff: $C_i = \max(S_i(T) - K, 0)$.

  3. Estimate the option price as: $\hat{C} = e^{-rT} \cdot \frac{1}{N}\sum_{i=1}^{N} C_i$.

For a put, replace the payoff with $\max(K - S_i(T), 0)$.

The standard error of the estimate is $\hat{\sigma}_C / \sqrt{N}$, where $\hat{\sigma}_C$ is the sample standard deviation of the discounted payoffs. This tells you how many paths you need for a given precision.
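Putting the three steps together for a European call (parameter values are illustrative; the Black-Scholes value for these inputs is about 8.02, which gives a sanity check):

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0
n_paths = 200_000
rng = np.random.default_rng(19)

# Risk-neutral GBM: drift r replaces mu; only the terminal price is needed here.
z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
payoffs = np.exp(-r * T) * np.maximum(ST - K, 0.0)

price = payoffs.mean()
stderr = payoffs.std(ddof=1) / np.sqrt(n_paths)
print(price, stderr)  # report the 95% CI as price +/- 1.96 * stderr
```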

American options pricing

American options allow early exercise, which makes Monte Carlo pricing harder because you need to determine the optimal exercise strategy at each possible exercise date.

The Longstaff-Schwartz least-squares Monte Carlo (LSM) method is the standard approach:

  1. Simulate $N$ paths of the underlying asset forward through all time steps.

  2. At the final time step, the exercise value is the European payoff.

  3. Working backward through each exercise date:

    • Compute the immediate exercise value for each in-the-money path.
    • Regress the discounted future continuation values against basis functions of the current asset price (e.g., polynomials).
    • The fitted values from the regression estimate the continuation value.
    • Exercise early if the immediate exercise value exceeds the estimated continuation value.
  4. The option price is the discounted average of the realized payoffs across all paths.

The backward induction is what makes this method work: it approximates the optimal stopping rule without requiring a lattice.
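A compact LSM sketch for an American put with a quadratic polynomial basis (a simplification of the full method; strike, rate, and path counts are illustrative):

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 110.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps
disc = np.exp(-r * dt)
rng = np.random.default_rng(29)

# Forward simulation of GBM paths under the risk-neutral measure.
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

cashflow = np.maximum(K - S[:, -1], 0.0)           # exercise value at expiry
for t in range(n_steps - 1, 0, -1):                # backward through exercise dates
    cashflow *= disc                               # discount continuation values one step
    itm = K - S[:, t] > 0.0                        # regress on in-the-money paths only
    if itm.sum() > 3:
        x = S[itm, t]
        coeffs = np.polyfit(x, cashflow[itm], 2)   # quadratic basis in the asset price
        continuation = np.polyval(coeffs, x)
        exercise = K - x
        cashflow[itm] = np.where(exercise > continuation, exercise, cashflow[itm])

price = disc * cashflow.mean()
print(price)
```

The regression replaces the unknown continuation value with a fitted function of the current price, which is exactly the approximation that makes early exercise tractable on simulated paths.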

Path-dependent options

Path-dependent options have payoffs that depend on the entire price trajectory, not just the terminal value. Monte Carlo is naturally suited for these because you already simulate full paths.

  • Asian options: Payoff depends on the average price. For an Asian call: $\max(\bar{S} - K, 0)$, where $\bar{S} = \frac{1}{n}\sum_{i=1}^{n} S(t_i)$.
  • Barrier options: Payoff depends on whether the price crosses a barrier level. A knock-out call becomes worthless if $S(t)$ hits the barrier; a knock-in call only activates if the barrier is hit.
  • Lookback options: Payoff depends on the maximum or minimum price over the option's life. A floating-strike lookback call pays $S(T) - \min_{0 \leq t \leq T} S(t)$.

For barrier options, be aware that discrete monitoring (checking the barrier only at simulation time steps) introduces bias. The price path could cross the barrier between time steps without being detected. Continuity corrections or finer time grids help mitigate this.

Exotic options valuation

Exotic options have non-standard features that make analytical pricing difficult or impossible. Examples include:

  • Basket options: Payoff depends on a weighted combination of multiple asset prices.
  • Quanto options: Payoff in one currency based on an asset denominated in another currency.
  • Volatility/variance swaps: Payoff based on realized volatility or variance over the contract period.

Monte Carlo handles these by simulating all relevant risk factors (multiple correlated asset prices, exchange rates, volatilities) jointly. The flexibility to model arbitrary payoff functions and correlation structures is what makes Monte Carlo the go-to method for exotic derivatives.


Risk management applications

Monte Carlo simulation is central to modern risk management. By generating thousands of scenarios for market variables, credit events, and operational losses, risk managers can estimate the full distribution of potential outcomes rather than relying on single-point estimates.

Value-at-Risk (VaR) estimation

Value-at-Risk quantifies the maximum loss over a given time horizon at a specified confidence level. For example, a 10-day 99% VaR of $5 million means there's a 1% chance of losing more than $5 million over 10 days.

Monte Carlo VaR estimation:

  1. Simulate $N$ scenarios for all risk factors (asset prices, interest rates, exchange rates, etc.) over the chosen time horizon.

  2. Revalue the portfolio under each scenario to get $N$ simulated portfolio values.

  3. Compute the profit/loss for each scenario: $\text{P\&L}_i = V_i - V_0$.

  4. Sort the P&L values from worst to best.

  5. The VaR at confidence level $\alpha$ is the loss at the $(1-\alpha)$ quantile of the loss distribution. For 95% VaR with 10,000 scenarios, it's the 500th worst loss.

Monte Carlo VaR is more flexible than parametric VaR (which assumes normality) and historical VaR (which is limited to past data), but it's computationally more expensive.

Expected shortfall calculation

Expected shortfall (ES), also called conditional VaR (CVaR), answers the question: given that we're in the worst $(1-\alpha)$ tail of scenarios, what's the average loss?

To compute ES from a Monte Carlo simulation:

  1. Follow the same steps as VaR to get the sorted P&L distribution.
  2. Identify all scenarios with losses exceeding the VaR threshold.
  3. Average those losses.

ES is considered a superior risk measure because it's coherent (it satisfies subadditivity, meaning diversification never increases risk), while VaR is not. Regulatory frameworks like Basel III/IV increasingly require ES for market risk capital calculations.

Example: If the 95% VaR is $1 million and the average of all losses beyond that threshold is $1.5 million, then the 95% ES is $1.5 million.
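The VaR and ES recipes can be sketched on a toy normal P&L distribution (our own illustration; for a normal loss, VaR at 95% is about $1.645\sigma$ and ES about $2.063\sigma$, which makes the output easy to check):

```python
import numpy as np

rng = np.random.default_rng(23)
n = 100_000
pnl = rng.normal(0.0, 1_000_000.0, size=n)   # simulated one-period P&L (illustrative)

alpha = 0.95
losses = -pnl                                 # losses are the negative of P&L
var_95 = np.quantile(losses, alpha)           # 95% VaR: 95th percentile of losses
es_95 = losses[losses > var_95].mean()        # ES: average of losses beyond the VaR

print(var_95, es_95)
```

ES is always at least as large as VaR at the same confidence level, since it averages only the losses beyond the VaR threshold.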

Credit risk simulation

Credit risk modeling with Monte Carlo typically involves simulating default events across a portfolio of obligors, accounting for correlations between defaults.

Common approach (Gaussian copula model):

  1. Assign each obligor a default probability based on credit rating and a correlation structure.

  2. Generate correlated normal random variables for each obligor (using Cholesky decomposition of the correlation matrix).

  3. An obligor defaults if its simulated variable falls below its default threshold (derived from the default probability via the normal CDF).

  4. For each defaulted obligor, simulate the recovery rate.

  5. Compute portfolio losses as $\sum_{\text{defaults}} \text{Exposure} \times (1 - \text{Recovery Rate})$.

  6. Repeat across many scenarios to build the loss distribution.

From this distribution, you can estimate expected loss (EL), unexpected loss (UL), and credit VaR.
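A sketch of the one-factor special case, where the Cholesky step reduces to $X_i = \sqrt{\rho}\,M + \sqrt{1-\rho}\,\varepsilon_i$ for a common factor $M$; uniform default probabilities, exposures, and recovery are our simplifying assumptions:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(31)
n_obligors, n_scenarios = 100, 20_000
p_default, rho = 0.02, 0.2
exposure, lgd = 1.0, 0.6   # loss given default = 1 - recovery rate

# Default threshold from the inverse normal CDF: default iff X_i < threshold.
threshold = NormalDist().inv_cdf(p_default)

M = rng.standard_normal((n_scenarios, 1))            # common (systematic) factor
eps = rng.standard_normal((n_scenarios, n_obligors)) # idiosyncratic factors
X = np.sqrt(rho) * M + np.sqrt(1.0 - rho) * eps      # correlated latent variables

defaults = X < threshold
losses = (defaults * exposure * lgd).sum(axis=1)     # portfolio loss per scenario

print(losses.mean())  # expected loss = 100 * 0.02 * 0.6 = 1.2
```

The common factor is what fattens the loss tail: in scenarios where $M$ is very negative, many obligors default together.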

Operational risk modeling

Operational risk covers losses from failed processes, systems, people, or external events (fraud, cyberattacks, natural disasters). The standard Monte Carlo approach uses a loss distribution approach (LDA):

  1. For each risk category, fit a frequency distribution (e.g., Poisson, negative binomial) to model how many loss events occur per period.

  2. Fit a severity distribution (e.g., lognormal, generalized Pareto) to model the size of each loss event.

  3. In each simulation:

    • Draw the number of events from the frequency distribution.
    • For each event, draw a loss amount from the severity distribution.
    • Sum to get the total loss for that risk category.
  4. Aggregate across risk categories (accounting for correlations if needed).

  5. Build the total operational loss distribution and compute VaR or ES.

The heavy-tailed nature of operational losses makes variance reduction techniques (especially importance sampling) particularly valuable here.

Simulation in actuarial science

Actuaries rely on Monte Carlo simulation throughout the insurance lifecycle: pricing products, setting reserves, managing solvency, and evaluating risk transfer mechanisms.

Claim frequency and severity

Insurance loss modeling starts with two components:

  • Claim frequency: The number of claims in a period. Typically modeled with discrete distributions like Poisson ($\lambda$) or negative binomial (which adds overdispersion).
  • Claim severity: The dollar amount of each claim. Typically modeled with continuous distributions like lognormal, Pareto, or gamma, depending on the tail behavior of the line of business.

Simulation steps:

  1. Draw the number of claims $N$ from the frequency distribution.
  2. For each of the $N$ claims, draw a severity $X_i$ from the severity distribution.
  3. The total loss for the period is $S = \sum_{i=1}^{N} X_i$.

This is a compound distribution model. Because $N$ is random, the total loss $S$ doesn't have a simple closed-form distribution in most cases, which is exactly why simulation is needed.
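The three steps, repeated over many scenarios (Poisson frequency and lognormal severity, with illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(37)
n_scenarios = 50_000
lam = 100.0                      # expected claim count per period (illustrative)

counts = rng.poisson(lam, size=n_scenarios)
# For each scenario, draw that many lognormal severities and sum them.
totals = np.array([rng.lognormal(7.0, 1.2, size=n).sum() for n in counts])

# Percentiles of the aggregate loss S feed premiums and reserves.
print(totals.mean(), np.quantile(totals, 0.95))
```

By Wald's identity the sample mean should sit near $E[N]\,E[X] = \lambda \, e^{\mu + \sigma^2/2}$, a useful validation of the simulation.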

Aggregate loss distributions

The aggregate loss distribution describes the total losses across an entire portfolio. Building it via Monte Carlo:

  1. For each policy (or homogeneous group of policies), simulate claim frequency and severity as above.
  2. Sum across all policies to get the portfolio's total loss for one scenario.
  3. Repeat for many scenarios (typically 10,000 to 1,000,000+).
  4. The empirical distribution of total losses gives you percentiles, means, and tail probabilities.

From the aggregate loss distribution, actuaries can:

  • Set premiums (e.g., expected loss plus a risk loading).
  • Determine reserves (e.g., the 75th or 95th percentile of future losses).
  • Evaluate reinsurance needs (how much tail risk to transfer).

Ruin probability estimation

Ruin probability is the chance that an insurer's surplus drops below zero over a given time horizon. The classical surplus process is:

$U(t) = u + ct - S(t)$

where $u$ is the initial surplus, $c$ is the premium rate per unit time, and $S(t)$ is the aggregate claims up to time $t$.

Monte Carlo estimation:

  1. Set the initial surplus $u$, premium rate $c$, and claim process parameters.
  2. Simulate the claim arrival times and amounts over the time horizon.
  3. At each claim arrival, update the surplus. If $U(t) < 0$ at any point, record a ruin event.
  4. Repeat for $N$ scenarios.
  5. The estimated ruin probability is $\hat{\psi} = \frac{\text{number of ruin scenarios}}{N}$.

For finite-time ruin probabilities, you stop at the time horizon. For infinite-time ruin, you simulate over a very long period. Since ruin is often a rare event, importance sampling can significantly improve the precision of the estimate.
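A sketch of the finite-time estimator, checking the surplus only at claim instants (which suffices, since the surplus decreases only when claims arrive); exponential severities and a 20% premium loading are our illustrative assumptions:

```python
import numpy as np

def ruin_probability(u, c, lam, sev_mean, T, n_scenarios, seed=0):
    """Finite-time ruin probability for a compound Poisson surplus process
    with exponential claim severities (parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    ruins = 0
    for _ in range(n_scenarios):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)      # next claim arrival
            if t > T:
                break                            # survived the horizon
            total_claims += rng.exponential(sev_mean)
            if u + c * t - total_claims < 0.0:   # surplus checked at claim instants
                ruins += 1
                break
    return ruins / n_scenarios

# 20% loading: c = 1.2 exceeds the expected claim rate lam * E[X] = 1.0.
p = ruin_probability(u=10.0, c=1.2, lam=1.0, sev_mean=1.0, T=50.0, n_scenarios=5_000)
print(p)  # the classical infinite-horizon value for these parameters is about 0.157
```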

Reinsurance contract evaluation

Reinsurance transfers part of the insurer's risk to a reinsurer. Monte Carlo simulation lets you compare different reinsurance structures by simulating their impact on the insurer's net position.

Common reinsurance types:

  • Quota share: The reinsurer takes a fixed percentage of every claim. If the quota is 30%, the ceded loss on each claim $X$ is $0.3X$.
  • Excess-of-loss (XL): The reinsurer covers individual claims above a retention $d$ up to a limit $\ell$. Ceded loss per claim: $\min(\max(X - d, 0), \ell)$.
  • Stop-loss: The reinsurer covers aggregate losses above a threshold.

Simulation steps:

  1. Simulate the full set of claims (frequency and severity) for the portfolio.
  2. Apply the reinsurance terms to each claim (or to the aggregate, depending on the contract type) to split losses into retained and ceded portions.
  3. Compute the net loss to the cedent for each scenario.
  4. Compare the distribution of net losses under different reinsurance structures.

Example: For an XL contract with retention $1 million and limit $5 million, the ceded loss per claim is $\min(\max(X - 1{,}000{,}000, \; 0), \; 5{,}000{,}000)$. A claim of $3 million cedes $2 million. A claim of $8 million cedes $5 million (the limit). A claim of $500,000 cedes nothing.
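The XL layer is a one-liner to verify, using the claim amounts from the example above:

```python
import numpy as np

def ceded_xl(claims, retention, limit):
    """Ceded amount per claim under an excess-of-loss treaty: min(max(X - d, 0), l)."""
    return np.minimum(np.maximum(claims - retention, 0.0), limit)

claims = np.array([500_000.0, 3_000_000.0, 8_000_000.0])
ceded = ceded_xl(claims, retention=1_000_000.0, limit=5_000_000.0)
retained = claims - ceded

print(ceded)  # [0, 2_000_000, 5_000_000], matching the example
```

In a full evaluation, the same function is applied to every simulated claim before summing retained and ceded losses per scenario.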

Simulation efficiency and accuracy

Getting reliable Monte Carlo results requires attention to both computational efficiency and statistical accuracy.

Convergence rate: The standard Monte Carlo estimator converges at rate $O(1/\sqrt{N})$. Doubling precision requires quadrupling the number of simulations. This makes variance reduction techniques (covered above) essential for practical applications.

Confidence intervals: For an estimate $\hat{\theta}$ based on $N$ simulations with sample standard deviation $\hat{\sigma}$, an approximate 95% confidence interval is:

$\hat{\theta} \pm 1.96 \cdot \frac{\hat{\sigma}}{\sqrt{N}}$

Always report confidence intervals alongside point estimates. A Monte Carlo estimate without an error bound is incomplete.

Quasi-Monte Carlo (QMC): Replacing pseudorandom numbers with low-discrepancy sequences (Sobol, Halton) can achieve convergence rates closer to $O(1/N)$ in moderate dimensions, a substantial improvement. QMC works best when the integrand is smooth and the dimension isn't too high.

Parallel computing: Monte Carlo simulations are "embarrassingly parallel" since each path is independent. Distributing paths across multiple processors or GPUs provides near-linear speedup, making large-scale simulations feasible.

Practical tips for accuracy:

  • Run a pilot simulation with a small NN to estimate variance before committing to a full run.
  • Use variance reduction techniques appropriate to your problem (antithetic variates for monotone payoffs, control variates when a related closed-form solution exists, importance sampling for rare events).
  • Check convergence by plotting the running estimate against NN. If it hasn't stabilized, you need more samples or better variance reduction.
  • Validate your simulation against known analytical results when possible (e.g., Black-Scholes prices for European options under GBM).