Fiveable

📊Actuarial Mathematics Unit 5 Review


5.1 Individual and collective risk models


Written by the Fiveable Content Team • Last updated August 2025

Risk models are the core tools actuaries use to quantify and manage financial losses. They combine claim frequency and severity distributions to estimate aggregate claims, which then drive insurance pricing, reserving, and solvency calculations. This section covers the two main modeling frameworks (individual and collective), compound distributions, approximation methods, and practical applications.

Types of risk models

Risk models quantify potential financial losses for actuarial applications like insurance pricing, reserving, and capital requirements. They fall into two main categories: individual models and collective models, each with different assumptions and trade-offs.

Individual vs collective models

Individual models analyze each insured unit separately, accounting for that unit's specific characteristics and risk factors. They give you granular insight into how different policyholders contribute to overall risk, but they demand more data and heavier computation.

Collective models treat the entire portfolio as a single entity, modeling aggregate claims without distinguishing between individual risks. They're simpler and more tractable, but you lose the ability to see risk at the policyholder level.

The choice between them depends on your goal. If you need risk-based pricing for individual policies, you want an individual model. If you need a quick estimate of total portfolio claims for reserving or capital purposes, a collective model is often sufficient.

Assumptions and limitations

Both types of models rest on simplifying assumptions:

  • Independence between claims (one claim doesn't influence another)
  • Stationarity of claim distributions over time (the underlying process doesn't shift)
  • Homogeneity of risk within groups or the portfolio

These assumptions rarely hold perfectly in practice. Claims can be correlated (think natural disasters affecting many policies at once), distributions shift as portfolios change, and risk is almost never truly homogeneous. Limitations also arise from data quality issues, parameter estimation error, and the inherent randomness of claims. That's why model selection, validation, and sensitivity analysis are not optional steps.

Individual risk models

Individual risk models assess each insured unit's claims experience separately, incorporating policyholder-specific risk characteristics and exposure measures.

Structure of individual models

An individual model has two core components:

  1. Frequency model for the number of claims per unit, using discrete distributions (Poisson, negative binomial, or binomial)
  2. Severity model for the size of each claim, using continuous distributions (gamma, lognormal, Pareto, or Weibull)

The frequency distribution you pick depends on the nature of the claims process. Poisson works well when claims are rare and independent. Negative binomial handles overdispersion (when variance exceeds the mean). Binomial fits situations with a fixed number of exposure units and a constant claim probability.
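A quick diagnostic for this choice is to compare the sample variance of historical claim counts to the sample mean. A minimal Python sketch, using hypothetical claim-count data:

```python
import statistics

def dispersion_ratio(counts):
    """Sample variance divided by sample mean of claim counts.
    A ratio well above 1 signals overdispersion, pointing toward a
    negative binomial rather than a Poisson frequency model."""
    return statistics.variance(counts) / statistics.mean(counts)

# Hypothetical annual claim counts for ten policy-years
counts = [0, 1, 0, 2, 0, 0, 5, 1, 0, 3]
ratio = dispersion_ratio(counts)  # well above 1 here, so overdispersed
```

A ratio near 1 is consistent with Poisson; a ratio well below 1 (underdispersion) would point toward the binomial.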

Key components and variables

  • Claim frequency: the count of claims within a specified period, modeled with a discrete distribution
  • Claim severity: the dollar amount of each claim, modeled with a continuous distribution
  • Risk factors: policyholder characteristics (age, gender, occupation) and policy features (deductibles, coverage limits) that influence both frequency and severity
  • Exposure: the measure of risk for each unit, such as number of policies, sum insured, or coverage duration

Modeling individual claim amounts

Claim size distributions tend to be right-skewed with heavy tails, meaning most claims are small but a few are very large. Choosing the right severity distribution matters a lot for capturing this behavior.

Common choices and when to use them:

  • Gamma: flexible, two-parameter, good for moderately skewed data
  • Lognormal: appropriate when claim sizes result from many small multiplicative effects
  • Pareto: captures heavy tails where large claims occur more frequently than gamma or lognormal would predict
  • Weibull: versatile for varying hazard rates (increasing, decreasing, or constant)

Parameters are estimated from historical data using maximum likelihood estimation (MLE) or the method of moments (MoM). Goodness-of-fit tests (like the Kolmogorov-Smirnov or Anderson-Darling test) help you decide which distribution fits best.
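As an illustration of the method of moments, here is a minimal Python sketch that fits a gamma severity model to hypothetical claim data (MLE and formal goodness-of-fit tests would typically use a statistics library):

```python
import statistics

def gamma_mom(claims):
    """Method-of-moments estimates for a gamma(shape, scale) severity
    model: shape = mean^2 / variance, scale = variance / mean."""
    m = statistics.mean(claims)
    v = statistics.variance(claims)
    return m * m / v, v / m

# Hypothetical claim amounts
claims = [1200.0, 800.0, 3100.0, 950.0, 2200.0, 1500.0]
shape, scale = gamma_mom(claims)
# By construction, shape * scale recovers the sample mean
```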

Tail risk measures such as Value-at-Risk (VaR) and Expected Shortfall (ES) quantify the potential for extreme claims and directly inform risk management decisions.
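For the Pareto (Lomax) case these tail measures have closed forms, so a small sketch can compute them directly. The parameter values below are hypothetical, and the survival function assumed is $(\theta/(x+\theta))^\alpha$:

```python
def pareto_var(alpha, theta, p):
    """Value-at-Risk (the p-quantile) of a Pareto (Lomax) distribution
    with survival function (theta / (x + theta))**alpha."""
    return theta * ((1.0 - p) ** (-1.0 / alpha) - 1.0)

def pareto_es(alpha, theta, p):
    """Expected Shortfall via the Pareto mean-excess function
    (u + theta) / (alpha - 1); requires alpha > 1 for a finite mean."""
    v = pareto_var(alpha, theta, p)
    return v + (v + theta) / (alpha - 1.0)

v99 = pareto_var(2.5, 1000.0, 0.99)   # 99% VaR
es99 = pareto_es(2.5, 1000.0, 0.99)   # 99% Expected Shortfall
```

Note how far ES sits above VaR for heavy-tailed severities: the average loss beyond the quantile can dwarf the quantile itself.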

Aggregate claims distribution

The aggregate claims distribution combines frequency and severity to determine total claims for the portfolio over a given period. If $N$ is the number of claims and $X_i$ is the size of claim $i$, then the aggregate loss is:

$$S = \sum_{i=1}^{N} X_i$$

Computing this distribution exactly requires convolution (repeatedly convolving the severity distribution with itself for each possible value of $N$) or Monte Carlo simulation. Key summary measures include:

  • Expected value of $S$ for pricing
  • Variance and higher moments for understanding volatility
  • Quantiles (e.g., the 99.5th percentile) for capital and solvency calculations

These outputs feed directly into risk premiums, stop-loss premiums, and capital allocation decisions.

Collective risk models

Collective risk models focus on aggregate claims from the entire portfolio, without tracking which policyholder generated which claim. They share the same frequency-severity building blocks as individual models but apply them at the portfolio level.

Structure of collective models

Like individual models, collective models have two components:

  1. Frequency model for the total number of claims across the portfolio
  2. Severity model for individual claim amounts (assumed i.i.d.)

These are combined via compound distributions to produce the aggregate claims distribution. The key structural difference from individual models is that you don't model each policyholder separately.


Key components and variables

  • Claim frequency: total claim count for the portfolio in a given period
  • Claim severity: individual claim amounts, assumed independent and identically distributed
  • Exposure: portfolio-level measures like total policies in force, aggregate sum insured, or total premium income
  • Risk parameters: distribution parameters (mean, variance, shape, scale) estimated from historical data

Claim frequency distributions

Poisson distribution (parameter $\lambda$): The default choice for claim counts. It assumes claims arrive independently at a constant rate. The mean equals the variance ($\mathbb{E}[N] = \mathrm{Var}(N) = \lambda$). Works well when risks are homogeneous.

Negative binomial distribution (parameters $r$ and $p$): Use this when claim counts show overdispersion, meaning the variance exceeds the mean. Overdispersion often arises from heterogeneity in the portfolio or contagion effects. You can think of it as a Poisson distribution where $\lambda$ itself is random (gamma-distributed).

Binomial distribution (parameters $n$ and $p$): Appropriate when you have a fixed number of policies $n$, each generating at most one claim with constant probability $p$. Less common in practice because it caps the number of claims at $n$.

Claim severity distributions

Gamma (shape $\alpha$, scale $\beta$): The shape parameter controls skewness and the scale parameter controls spread. Flexible enough for many moderate-tailed claim distributions.

Lognormal (parameters $\mu$, $\sigma$): If $\ln(X)$ is normally distributed, then $X$ is lognormal. Useful when claims arise from multiplicative processes. Heavier-tailed than gamma for the same mean and variance.

Pareto (shape $\alpha$, scale $\theta$): The go-to distribution for heavy-tailed data. Large claims show up more often than gamma or lognormal would suggest. Commonly used for reinsurance and catastrophe modeling.

Weibull (shape $k$, scale $\lambda$): Offers flexibility in the hazard rate. When $k < 1$, the hazard decreases; when $k > 1$, it increases; when $k = 1$, it's constant (reducing to the exponential distribution).

Aggregate claims distribution

The aggregate claims distribution combines frequency and severity through compound distribution techniques:

  • Compound Poisson: $N \sim \text{Poisson}(\lambda)$, claim amounts i.i.d. This is the most widely used collective model due to its analytical tractability.
  • Compound negative binomial: $N \sim \text{NegBin}(r, p)$, claim amounts i.i.d. Accommodates overdispersion in frequency.

Computing these distributions exactly can be done using Panjer's recursion, other recursive formulas, or Fast Fourier Transform (FFT) methods; the recursive approaches are discussed further below.

Compound distributions

A compound distribution models aggregate claims as a sum of a random number of i.i.d. claim amounts. It's the mathematical engine behind collective risk models.

Definition and properties

Let $N$ be the random number of claims and $X_1, X_2, \ldots, X_N$ be i.i.d. claim amounts. The aggregate claims variable is:

$$S = \sum_{i=1}^{N} X_i$$

The distribution of $S$ is called a compound distribution. Two results you need to know:

  • Expected value: $\mathbb{E}[S] = \mathbb{E}[N] \cdot \mathbb{E}[X]$
  • Variance: $\mathrm{Var}(S) = \mathbb{E}[N] \cdot \mathrm{Var}(X) + \mathrm{Var}(N) \cdot (\mathbb{E}[X])^2$

The variance formula has two terms. The first captures randomness in claim sizes (even if the number of claims were fixed). The second captures randomness in the number of claims (even if every claim were the same size). Understanding this decomposition helps you see which source of uncertainty dominates in a given portfolio.
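A minimal Python sketch of this decomposition, using hypothetical portfolio figures (Poisson frequency with mean 50, claim mean 2,000, claim standard deviation 3,000):

```python
def compound_moments(mean_n, var_n, mean_x, var_x):
    """Mean and variance of S = X_1 + ... + X_N for i.i.d. X and
    independent N, via E[S] = E[N]*E[X] and
    Var(S) = E[N]*Var(X) + Var(N)*E[X]^2."""
    mean_s = mean_n * mean_x
    var_s = mean_n * var_x + var_n * mean_x ** 2
    return mean_s, var_s

# Poisson(50) frequency, so mean_n = var_n = 50 (hypothetical figures)
mean_s, var_s = compound_moments(50.0, 50.0, 2000.0, 3000.0 ** 2)
# First variance term: 50 * 9e6 = 4.5e8 (claim-size randomness)
# Second variance term: 50 * 4e6 = 2.0e8 (claim-count randomness)
```

Here the claim-size term dominates, which is typical when severities are volatile relative to their mean.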

Poisson compound distribution

When $N \sim \text{Poisson}(\lambda)$, the variance formula simplifies because $\mathbb{E}[N] = \mathrm{Var}(N) = \lambda$:

$$\mathrm{Var}(S) = \lambda \cdot \mathbb{E}[X^2]$$

This follows from expanding the general formula using $\mathrm{Var}(N) = \lambda$ and the identity $\mathrm{Var}(X) + (\mathbb{E}[X])^2 = \mathbb{E}[X^2]$.

The compound Poisson model is the workhorse of collective risk theory. Its popularity comes from the Poisson process's memoryless property and the fact that sums and thinnings of Poisson processes remain Poisson.

Negative binomial compound distribution

When $N \sim \text{NegBin}(r, p)$, the model accommodates overdispersion in claim counts. Since $\mathrm{Var}(N) > \mathbb{E}[N]$ for the negative binomial, the second term in the variance formula contributes more, reflecting the extra uncertainty from heterogeneous claim frequencies.

The PMF of the aggregate claims can be computed using the same recursive or numerical methods as the compound Poisson case.

Recursion formulas

Exact computation of compound distribution probabilities is feasible using recursive methods. The most important is Panjer's recursion.

Panjer's recursion applies when the frequency distribution belongs to the $(a, b, 0)$ class, meaning its probabilities satisfy:

$$P(N = n) = \left(a + \frac{b}{n}\right) P(N = n-1), \quad n = 1, 2, 3, \ldots$$

This class includes Poisson ($a = 0$, $b = \lambda$), negative binomial ($a = 1-p$, $b = (r-1)(1-p)$), and binomial ($a = -p/(1-p)$, $b = (n+1)p/(1-p)$).

The recursion computes $P(S = x)$ for discretized claim amounts by expressing each probability in terms of previously computed values. This avoids the need for full convolution and is computationally efficient.
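A compact Python sketch of Panjer's recursion for the compound Poisson case, where $a = 0$ and $b = \lambda$ (the severity pmf below is a toy discretization on unit monetary amounts):

```python
import math

def panjer_poisson(lam, sev, smax):
    """Panjer recursion for compound Poisson aggregate claims.

    lam  : Poisson mean
    sev  : discretized severity pmf, sev[j] = P(X = j), j = 0..len(sev)-1
    smax : largest aggregate amount to evaluate
    Returns [P(S = 0), ..., P(S = smax)].
    """
    g = [math.exp(-lam * (1.0 - sev[0]))]  # starting value P(S = 0)
    for s in range(1, smax + 1):
        acc = 0.0
        for j in range(1, min(s, len(sev) - 1) + 1):
            # (a, b, 0) recursion term with a = 0, b = lam for Poisson
            acc += (lam * j / s) * sev[j] * g[s - j]
        g.append(acc)
    return g

# Toy model: claims of size 1 or 2 with equal probability, mean 2 claims
probs = panjer_poisson(2.0, [0.0, 0.5, 0.5], 20)
```

Each $P(S = s)$ is built from the $s$ previously computed values, so the whole run is quadratic in `smax` rather than requiring repeated convolutions.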

Other recursive approaches include De Pril's recursion and Hipp's recursion, which offer alternatives when Panjer's formula is less convenient.


Approximations for aggregate claims

When exact computation of the aggregate claims distribution is too complex or the portfolio is large, approximation methods provide practical alternatives.

Normal approximation

The Central Limit Theorem (CLT) justifies approximating $S$ by a normal distribution with matching mean and variance:

$$S \approx \text{Normal}\left(\mathbb{E}[S],\; \mathrm{Var}(S)\right)$$

This works well when the number of claims is large and individual claim amounts aren't extremely heavy-tailed. The main weakness: the normal distribution is symmetric, so it underestimates the probability of large aggregate losses in the right tail. For solvency calculations where tail accuracy matters, the normal approximation can be dangerously optimistic.
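A minimal sketch of the normal approximation quantile using Python's standard library (the portfolio figures are hypothetical):

```python
import statistics

def normal_quantile(mean_s, var_s, p):
    """Normal approximation to the p-quantile of aggregate claims S,
    matching E[S] and Var(S)."""
    z = statistics.NormalDist().inv_cdf(p)
    return mean_s + z * var_s ** 0.5

# Hypothetical portfolio: E[S] = 100,000 and Var(S) = 6.5e8
q995 = normal_quantile(100_000.0, 6.5e8, 0.995)
```

If the true distribution of $S$ is right-skewed, this 99.5% figure will understate the capital actually needed.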

Normal power approximation

The normal power approximation (NPA) improves on the normal approximation by incorporating the skewness (and sometimes kurtosis) of the aggregate claims distribution. It applies a polynomial transformation to a standard normal variable to capture asymmetry.

The NPA is a good middle ground when the claim size distribution is moderately skewed and the portfolio is large enough for the CLT to partially apply, but not so large that the plain normal approximation is sufficient.

Translated gamma approximation

This method matches the first three moments (mean, variance, skewness) of the aggregate claims distribution to a translated gamma distribution. The translated gamma has three parameters, giving it enough flexibility to capture skewness that the normal approximation misses.

The "translated" part means the distribution is shifted along the real line, which allows it to accommodate situations where aggregate claims could theoretically take negative values (e.g., after accounting for deductibles or reinsurance recoveries). In practice, this approximation tends to perform better than the normal approximation in the tails.
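The moment matching itself is simple algebra: gamma skewness is $2/\sqrt{\alpha}$, so the skewness fixes the shape, the variance fixes the scale, and the shift aligns the means. A minimal sketch (the toy moments are hypothetical):

```python
def translated_gamma_params(mean_s, var_s, skew_s):
    """Match mean, variance, and skewness of S to S ≈ k + Gamma(alpha, theta).
    Gamma skewness is 2/sqrt(alpha) and variance is alpha * theta**2;
    requires positive skewness."""
    alpha = 4.0 / skew_s ** 2        # skewness match fixes the shape
    theta = (var_s / alpha) ** 0.5   # variance match fixes the scale
    k = mean_s - alpha * theta       # shift so the means agree
    return alpha, theta, k

# Toy aggregate moments: mean 10, variance 4, skewness 1
alpha, theta, k = translated_gamma_params(10.0, 4.0, 1.0)
```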

Simulation techniques

Monte Carlo simulation is the most flexible approach. The basic procedure:

  1. Draw a random value of $N$ from the frequency distribution
  2. For each of the $N$ claims, draw a random claim amount from the severity distribution
  3. Sum the claim amounts to get one realization of $S$
  4. Repeat steps 1-3 many thousands of times
  5. Use the empirical distribution of the simulated $S$ values as your approximation

Simulation handles complex dependencies, copulas, and non-standard distributions that analytical methods can't easily accommodate. The trade-off is computational cost, which can be reduced using variance reduction techniques like importance sampling, stratified sampling, or antithetic variates.
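The procedure above can be sketched in plain Python. Poisson frequency and exponential severity are illustrative choices, and the Poisson draw uses Knuth's multiplication method since the standard library lacks one:

```python
import math
import random

def poisson_draw(rng, lam):
    """One Poisson(lam) draw via Knuth's multiplication method."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def simulate_aggregate(lam, sev_mean, n_sims, seed=42):
    """Monte Carlo aggregate claims: Poisson(lam) frequency and
    exponential severities with mean sev_mean (illustrative model)."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sims):
        n = poisson_draw(rng, lam)
        sims.append(sum(rng.expovariate(1.0 / sev_mean) for _ in range(n)))
    return sims

sims = simulate_aggregate(lam=5.0, sev_mean=1000.0, n_sims=20_000)
est_mean = sum(sims) / len(sims)  # should land near E[S] = 5,000
```

Quantiles, stop-loss premiums, or ES estimates then come straight from sorting the simulated values.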

Applications of risk models

Risk models aren't just theoretical constructs. They drive real decisions across the insurance industry. The right model depends on the portfolio, available data, and the specific business question.

Insurance pricing and reserving

For pricing, risk models estimate the pure premium (expected claims cost per policy). Collective models set the overall premium level for a portfolio, while individual models enable risk-based pricing where each policyholder's premium reflects their specific risk profile.

For reserving, risk models estimate future claims liabilities so the insurer sets aside adequate funds. Stochastic reserving techniques like bootstrapping or Mack's chain ladder method go beyond point estimates by quantifying the uncertainty around reserve estimates.

Reinsurance and risk sharing

Reinsurance transfers part of an insurer's risk to a reinsurer in exchange for a premium. Risk models are essential for:

  • Designing reinsurance contracts (excess-of-loss, quota share, stop-loss)
  • Pricing those contracts by quantifying expected ceded claims
  • Optimizing the reinsurance program by balancing retained risk against reinsurance cost, subject to the insurer's risk appetite and budget constraints

Solvency and capital requirements

Regulatory frameworks like Solvency II (Europe) require insurers to hold enough capital to survive adverse scenarios. Risk models feed into this process by:

  • Assessing capital requirements for underwriting risk, market risk, and operational risk
  • Computing VaR and Tail Value-at-Risk (TVaR) at specified confidence levels (e.g., 99.5% for Solvency II)
  • Supporting stress testing and scenario analysis to evaluate balance sheet resilience under extreme events

Model selection and validation

Picking the right model is as important as running it correctly. The process involves:

  1. Candidate comparison: Fit several plausible models to historical data
  2. Goodness-of-fit testing: Use statistical tests (chi-square, K-S, Q-Q plots) to assess fit quality
  3. Information criteria: Compare models using AIC or BIC, which balance fit quality against model complexity (penalizing extra parameters)
  4. Validation: Back-test against historical data and perform out-of-sample testing to check predictive accuracy
  5. Sensitivity analysis: Vary key assumptions and parameters to understand how much the outputs change, identifying which inputs matter most
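As a small illustration of step 3, here is how AIC would score a candidate frequency model from its fitted log-likelihood (the claim counts are hypothetical, and only the Poisson fit is shown; competing models would be scored the same way and the lowest AIC preferred):

```python
import math

def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2*loglik (lower is better);
    the 2k term penalizes extra parameters."""
    return 2 * n_params - 2 * loglik

def poisson_loglik(lam, counts):
    """Log-likelihood of i.i.d. Poisson(lam) claim counts."""
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

# Hypothetical claim counts; the Poisson MLE is just the sample mean
counts = [0, 1, 0, 2, 0, 0, 5, 1, 0, 3]
lam_hat = sum(counts) / len(counts)
aic_poisson = aic(poisson_loglik(lam_hat, counts), n_params=1)
```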

No model is perfect. The goal is to find one that's accurate enough for the decision at hand while remaining tractable and interpretable.