Fiveable

📊Actuarial Mathematics Unit 5 Review


5.3 Aggregate loss distributions and stop-loss reinsurance


Written by the Fiveable Content Team • Last updated August 2025

Aggregate Loss Distributions

Aggregate loss distributions model the total losses an insurance company incurs over a defined period. By combining a frequency distribution (how many claims occur) with a severity distribution (how large each claim is), actuaries can estimate the full range of possible total losses. These models are foundational for setting premiums, establishing reserves, and structuring reinsurance.

Stop-loss reinsurance builds directly on these models. It protects insurers against aggregate losses that exceed a specified threshold, transferring tail risk to a reinsurer. Understanding both the aggregate loss distribution and the mechanics of stop-loss coverage is essential for managing catastrophic or unexpectedly large losses.

Aggregate Loss Distributions

An aggregate loss S represents the total claims paid over a specific period (a policy term, quarter, or year). It's defined as:

S = X_1 + X_2 + \cdots + X_N

where N is the random number of claims (frequency) and X_i is the size of the i-th claim (severity). The three building blocks are the frequency distribution, the severity distribution, and the compound model that ties them together.
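To make the compound structure concrete, here is a minimal Monte Carlo sketch of S using only the Python standard library. The exponential severity and the `poisson_draw` helper are illustrative assumptions, not choices made in the text:

```python
import math
import random

def poisson_draw(lam, rng):
    """Draw from Poisson(lam) using Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_aggregate_losses(lam, severity_sampler, n_sims, seed=0):
    """Monte Carlo draws of S = X_1 + ... + X_N under a compound Poisson model."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_sims):
        n_claims = poisson_draw(lam, rng)                                   # frequency N
        draws.append(sum(severity_sampler(rng) for _ in range(n_claims)))  # sum of severities
    return draws

# Illustrative: lambda = 3 claims/year, exponential severities with mean 5,000
losses = simulate_aggregate_losses(3.0, lambda rng: rng.expovariate(1 / 5000), n_sims=20_000)
mean_s = sum(losses) / len(losses)   # should land near 3 * 5,000 = 15,000
```

Simulation like this is a useful cross-check on the closed-form moment formulas developed below, and it extends to severity distributions with no tractable compound form.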

Models of Aggregate Losses

Aggregate loss models combine a frequency distribution for N with a severity distribution for each X_i to produce the distribution of S. The most common compound models are:

  • Compound Poisson — the default starting point; assumes claims arrive independently at a constant rate
  • Compound negative binomial — used when claim counts show overdispersion (variance exceeds the mean)
  • Compound binomial — used when there's a fixed, known number of exposure units

Selecting the right model depends on the characteristics of the risk. Actuaries rely on historical data, goodness-of-fit tests, industry benchmarks, and expert judgment to choose.

Frequency Distributions

Frequency distributions describe the probability of observing a specific number of claims in a given period. The three standard choices each suit different situations:

  • Poisson — Claims occur independently at a constant average rate \lambda. The mean equals the variance (E[N] = \text{Var}(N) = \lambda). Commonly applied in auto insurance where individual claim events are roughly independent.
  • Negative binomial — Appropriate when there is overdispersion, meaning \text{Var}(N) > E[N]. This often arises in health insurance, where unobserved heterogeneity among insureds causes extra variability in claim counts.
  • Binomial — Used when there is a fixed number of exposure units n, each with the same claim probability q. Typical in group life insurance where you know exactly how many lives are covered.

Severity Distributions

Severity distributions model the size or cost of individual claims. Common choices include:

  • Lognormal — Works well for claims that are mostly moderate but occasionally very large (property damage)
  • Gamma — Flexible shape; often used for medical claims
  • Pareto — Heavy-tailed; suited for liability or catastrophe claims where extreme values are more likely
  • Weibull — Useful when the hazard rate changes over time

Actuaries assess fit using goodness-of-fit tests (Kolmogorov-Smirnov, Anderson-Darling, chi-square) and graphical tools like Q-Q plots and histograms overlaid with the fitted density.

Compound Poisson Distribution

The compound Poisson model assumes N \sim \text{Poisson}(\lambda) and the X_i are i.i.d. with a common distribution. Its key moments have clean formulas:

  • E[S] = \lambda \cdot E[X]
  • \text{Var}(S) = \lambda \cdot E[X^2]

That variance formula is worth memorizing. Notice it uses the second raw moment E[X^2], not the variance of X. This follows from the law of total variance.
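The step from the law of total variance down to \lambda E[X^2] can be written out explicitly:

```latex
\begin{aligned}
\operatorname{Var}(S) &= E[N]\,\operatorname{Var}(X) + \operatorname{Var}(N)\,(E[X])^2 \\
&= \lambda\,\operatorname{Var}(X) + \lambda\,(E[X])^2
   \qquad \text{(since } E[N] = \operatorname{Var}(N) = \lambda \text{)} \\
&= \lambda\left(\operatorname{Var}(X) + (E[X])^2\right) \\
&= \lambda\,E[X^2].
\end{aligned}
```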

The compound Poisson model also has a useful additivity property: if two independent portfolios each follow compound Poisson distributions, their combined aggregate loss is also compound Poisson. This makes it convenient for merging books of business.

Compound Negative Binomial Distribution

When claim count data shows overdispersion, the compound negative binomial is a better fit. Here N \sim \text{NegBin}(r, p) with E[N] = r(1-p)/p and \text{Var}(N) = r(1-p)/p^2.

The moments of S are:

  • E[S] = E[N] \cdot E[X]
  • \text{Var}(S) = E[N] \cdot \text{Var}(X) + \text{Var}(N) \cdot (E[X])^2

The second term, \text{Var}(N) \cdot (E[X])^2, captures the extra variability from the overdispersed claim count. Compared to the compound Poisson, aggregate losses will have a heavier right tail.


Compound Binomial Distribution

The compound binomial model applies when there are exactly n exposure units, each independently generating a claim with probability q, so N \sim \text{Bin}(n, q).

This model is less commonly used than the other two because most real portfolios don't have a strict fixed number of independent units. However, it's natural for group coverages where the roster is known. Its PMF can be computed via convolutions or recursive methods.

Convolutions vs. Recursive Methods

Once you've chosen frequency and severity distributions, you need to actually compute the distribution of S. Two main approaches exist:

Convolutions compute the distribution of S by directly combining the severity distribution with itself, weighted by the frequency probabilities. For a discrete severity with possible values 0, 1, 2, \ldots, the n-fold convolution gives the distribution of the sum of n claims. This is conceptually straightforward but computationally expensive for large portfolios.
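Assuming a severity pmf on the non-negative integers and a frequency pmf truncated at some maximum count, the convolution approach can be sketched as follows (function names are illustrative):

```python
def convolve(f, g):
    """Convolution of two pmfs supported on 0, 1, 2, ..."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

def aggregate_pmf_by_convolution(freq_pmf, sev_pmf):
    """P(S = x) = sum over n of P(N = n) * (n-fold convolution of the severity)."""
    max_x = (len(freq_pmf) - 1) * (len(sev_pmf) - 1)
    agg = [0.0] * (max_x + 1)
    n_fold = [1.0]                        # 0-fold convolution: point mass at S = 0
    for p_n in freq_pmf:                  # p_n = P(N = n)
        for x, prob in enumerate(n_fold):
            agg[x] += p_n * prob
        n_fold = convolve(n_fold, sev_pmf)
    return agg

# Sanity check: with a severity that is always 1, S = N, so the frequency pmf is recovered
agg = aggregate_pmf_by_convolution([0.5, 0.3, 0.2], [0.0, 1.0])
```

The nested loops make the cost grow quickly with the number of claims and the severity support, which is exactly the inefficiency that motivates the recursive method.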

Panjer's recursion is far more efficient. It applies when the frequency distribution belongs to the (a, b, 0) class, meaning:

\frac{P(N = k)}{P(N = k-1)} = a + \frac{b}{k}, \quad k = 1, 2, 3, \ldots

The Poisson, negative binomial, and binomial distributions all satisfy this condition. The recursive formula for the aggregate loss probabilities (with discrete severity) is:

f_S(x) = \frac{1}{1 - a \cdot f_X(0)} \sum_{y=1}^{x} \left(a + \frac{b y}{x}\right) f_X(y) \cdot f_S(x - y)

Starting from f_S(0), you build up the entire distribution of S iteratively. This is much faster than computing convolutions directly.
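A sketch of the recursion for the compound Poisson case, where a = 0 and b = \lambda, and the starting value is f_S(0) = e^{-\lambda(1 - f_X(0))} (the standard Poisson starting value):

```python
import math

def panjer_poisson(lam, sev_pmf, max_x):
    """Panjer recursion for compound Poisson frequency (a = 0, b = lam).

    sev_pmf[y] = P(X = y) for y = 0, 1, 2, ...; returns [f_S(0), ..., f_S(max_x)].
    """
    a, b = 0.0, lam
    fx0 = sev_pmf[0] if sev_pmf else 0.0
    f_s = [math.exp(-lam * (1.0 - fx0))]           # f_S(0) for Poisson frequency
    for x in range(1, max_x + 1):
        total = 0.0
        for y in range(1, min(x, len(sev_pmf) - 1) + 1):
            total += (a + b * y / x) * sev_pmf[y] * f_s[x - y]
        f_s.append(total / (1.0 - a * fx0))
    return f_s

# Sanity check: a degenerate severity at 1 makes S = N, so f_s is the Poisson(2) pmf
f_s = panjer_poisson(2.0, [0.0, 1.0], max_x=4)
```

The work per point is one short inner sum, so the full distribution up to max_x costs O(max_x^2) at worst, versus the much larger cost of building every n-fold convolution separately.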

Aggregate Claims Examples

Example 1 (Compound Poisson): An auto insurer expects an average of \lambda = 3 claims per policy year. Claim sizes follow a lognormal distribution with mean $5,000 and standard deviation $2,000.

  • E[S] = 3 \times 5{,}000 = \$15{,}000
  • To find \text{Var}(S), you need E[X^2] = \text{Var}(X) + (E[X])^2 = 4{,}000{,}000 + 25{,}000{,}000 = 29{,}000{,}000
  • \text{Var}(S) = 3 \times 29{,}000{,}000 = 87{,}000{,}000, so \text{SD}(S) \approx \$9{,}327

Example 2 (Compound Negative Binomial): A health insurer observes E[N] = 5 and \text{Var}(N) = 10 (an overdispersion ratio of 2). Claim sizes follow a gamma distribution with mean $2,000 and standard deviation $1,000.

  • E[S] = 5 \times 2{,}000 = \$10{,}000
  • \text{Var}(S) = 5 \times 1{,}000{,}000 + 10 \times 4{,}000{,}000 = 5{,}000{,}000 + 40{,}000{,}000 = 45{,}000{,}000

Notice how the overdispersion in claim counts dramatically increases the variance of aggregate losses compared to what a Poisson model would give (\text{Var}(S) = 5 \times 5{,}000{,}000 = 25{,}000{,}000).
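Both examples reduce to the same two-moment formula from the law of total variance; a small sketch reproducing the numbers:

```python
def compound_moments(mean_n, var_n, mean_x, var_x):
    """Mean and variance of S = X_1 + ... + X_N via the law of total variance:
    E[S] = E[N] * E[X],  Var(S) = E[N] * Var(X) + Var(N) * (E[X])^2."""
    mean_s = mean_n * mean_x
    var_s = mean_n * var_x + var_n * mean_x ** 2
    return mean_s, var_s

# Example 1: compound Poisson (E[N] = Var(N) = 3), severity mean 5,000, sd 2,000
m1, v1 = compound_moments(3, 3, 5_000, 2_000 ** 2)   # (15_000, 87_000_000)

# Example 2: compound negative binomial, severity mean 2,000, sd 1,000
m2, v2 = compound_moments(5, 10, 2_000, 1_000 ** 2)  # (10_000, 45_000_000)
```

For the Poisson case this collapses to \lambda E[X^2], matching the formula given earlier.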

Stop-Loss Reinsurance

Reinsurance allows an insurer to transfer part of its risk to another company (the reinsurer). Stop-loss reinsurance specifically covers aggregate losses that exceed a chosen threshold d (the retention) over a defined period. The reinsurer pays \max(S - d, 0), and the insurer keeps everything up to d.

Purpose of Reinsurance

Reinsurance serves several functions:

  • Volatility reduction — Smooths financial results by capping the insurer's worst-case losses
  • Capacity expansion — With less risk retained, the insurer can underwrite more business
  • Diversification — Transfers concentrated risk to reinsurers who pool it across many cedants and geographies
  • Capital management — Helps meet regulatory capital requirements (e.g., risk-based capital) and maintain credit ratings

Types of Reinsurance

The two broad categories are:

  • Treaty reinsurance — Covers an entire portfolio of risks under a long-term agreement. The reinsurer must accept all risks that fall within the treaty's terms.
  • Facultative reinsurance — Covers a specific individual risk or policy, negotiated case by case. Used for unusual or very large exposures.

Within each category, reinsurance can be structured as:

  • Proportional — The reinsurer takes a fixed share of premiums and losses. Includes quota share (fixed percentage of every policy) and surplus share (reinsurer covers amounts above a line retained by the insurer).
  • Non-proportional — The reinsurer pays only when losses exceed a threshold. Includes excess of loss (per-claim basis) and stop-loss (aggregate basis).

Excess of Loss vs. Stop-Loss

These are both non-proportional, but they trigger differently:

| Feature | Excess of Loss | Stop-Loss |
| --- | --- | --- |
| Trigger | Individual claim exceeds retention d | Aggregate losses exceed retention d |
| Reinsurer pays | \max(X_i - d, 0) per claim | \max(S - d, 0) for total losses |
| Common use | Property and casualty | Health and life |
| Protects against | Single large claims | Accumulation of many claims |

Stop-loss reinsurance can be further divided into individual stop-loss (ISL), which caps losses on any single covered member, and aggregate stop-loss (ASL), which caps total losses across the entire group.

Determining Stop-Loss Premiums

The net stop-loss premium (also called the pure premium) is the expected value of the reinsurer's payout:

\pi_{SL} = E[\max(S - d, 0)] = \int_d^{\infty} (s - d) \cdot f_S(s) \, ds

This can also be written as:

\pi_{SL} = E[S] - E[\min(S, d)]

where E[\min(S, d)] = \int_0^{d} s \cdot f_S(s) \, ds + d \cdot P(S > d).
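With a discrete aggregate pmf (for example, one produced by Panjer's recursion), both quantities become simple sums. A sketch, using an illustrative three-point pmf:

```python
def stop_loss_premium(agg_pmf, d):
    """Net stop-loss premium E[max(S - d, 0)] for a pmf on 0, 1, 2, ..."""
    return sum(max(s - d, 0) * p for s, p in enumerate(agg_pmf))

def limited_expected_value(agg_pmf, d):
    """E[min(S, d)] -- the insurer's expected retained loss."""
    return sum(min(s, d) * p for s, p in enumerate(agg_pmf))

pmf = [0.5, 0.3, 0.2]                      # illustrative: P(S=0), P(S=1), P(S=2)
mean_s = sum(s * p for s, p in enumerate(pmf))
premium = stop_loss_premium(pmf, 1)        # only the S=2 outcome pays: 0.2 * 1 = 0.2
```

The identity \pi_{SL} = E[S] - E[\min(S, d)] gives a cheap consistency check: the two functions above must always agree with it.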

To compute this in practice:

  1. Model the aggregate loss distribution S using the appropriate compound model

  2. Calculate P(S > d) and E[\max(S - d, 0)] from the distribution

  3. Add a risk margin (loading) to reflect the reinsurer's cost of capital and parameter uncertainty

  4. The gross stop-loss premium is \pi_{SL} \times (1 + \theta), where \theta is the loading factor

The loading factor varies by reinsurer and depends on the tail risk, the cedant's loss history, and market conditions.

Impact on Aggregate Losses

Stop-loss reinsurance transforms the insurer's retained loss from S to \min(S, d). This has several effects on the distribution:

  • The right tail is capped at d, so the insurer's maximum possible loss becomes d.
  • The variance decreases because extreme outcomes are removed.
  • The mean retained loss drops from E[S] to E[\min(S, d)].
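These effects can be verified numerically on any discrete aggregate pmf; the pmf below is illustrative:

```python
def pmf_mean_var(pmf, transform=lambda s: s):
    """Mean and variance of g(S) for a pmf on 0, 1, 2, ..."""
    mean = sum(transform(s) * p for s, p in enumerate(pmf))
    second = sum(transform(s) ** 2 * p for s, p in enumerate(pmf))
    return mean, second - mean ** 2

pmf = [0.5, 0.2, 0.1, 0.1, 0.1]        # illustrative aggregate pmf on {0, ..., 4}
full = pmf_mean_var(pmf)                                     # moments of S
retained = pmf_mean_var(pmf, transform=lambda s: min(s, 2))  # moments of min(S, 2)
# Both the mean and the variance of the retained loss are strictly lower
```

Here the retained mean drops from 1.1 to 0.8 and the variance from 1.89 to 0.76, illustrating how the cap at d removes both expected loss and volatility.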

Actuaries must account for this truncation when pricing the underlying insurance policies and when calculating reserves. Ignoring the reinsurance would overstate the insurer's risk; ignoring the reinsurance cost would understate expenses.

Advantages of Stop-Loss

  • Protects against catastrophic accumulations of losses that could threaten solvency
  • Reduces earnings volatility, making financial results more predictable
  • Frees up capital that can be deployed to write additional business
  • Helps satisfy regulatory solvency requirements

Disadvantages of Stop-Loss

  • Cost — Premiums can be substantial, especially for low retentions or volatile portfolios
  • Basis risk — The reinsurance terms may not perfectly align with the insurer's actual loss experience (e.g., different definitions of covered losses)
  • Moral hazard — With downside protection in place, insurers may relax underwriting discipline or take on riskier business
  • Reinsurer restrictions — Reinsurers may impose exclusions, caps, or strict underwriting guidelines that limit the insurer's flexibility

Optimal Stop-Loss Retention

Choosing the retention d is a balancing act. A lower d gives more protection but costs more in reinsurance premium. A higher d is cheaper but leaves the insurer exposed to larger aggregate losses.

Actuaries determine the optimal retention by:

  1. Modeling the aggregate loss distribution under various retention levels
  2. Computing risk measures for the retained loss at each level, such as Value-at-Risk (VaR) at a chosen confidence level or Conditional Tail Expectation (CTE, also called TVaR)
  3. Comparing the marginal reduction in risk against the marginal increase in reinsurance cost
  4. Selecting the retention that minimizes a chosen objective function (e.g., total cost of risk = retained losses + reinsurance premium + cost of capital)
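The retention sweep in the steps above can be sketched as a grid search. The objective here (retained mean, plus a capital charge proportional to the retained standard deviation, plus the loaded premium) is a deliberately simplified stand-in for the "total cost of risk" in step 4; the pmf, loading, and capital rate are all illustrative assumptions:

```python
import math

def retained_stats(agg_pmf, d):
    """Mean and standard deviation of the retained loss min(S, d)."""
    mean = sum(min(s, d) * p for s, p in enumerate(agg_pmf))
    second = sum(min(s, d) ** 2 * p for s, p in enumerate(agg_pmf))
    return mean, math.sqrt(max(second - mean ** 2, 0.0))

def total_cost_of_risk(agg_pmf, d, loading=0.3, capital_rate=0.5):
    """Toy objective: retained mean + capital charge on retained sd + loaded premium."""
    mean, sd = retained_stats(agg_pmf, d)
    premium = sum(max(s - d, 0) * p for s, p in enumerate(agg_pmf))
    return mean + capital_rate * sd + (1 + loading) * premium

def best_retention(agg_pmf, **kwargs):
    """Grid search over integer retentions 0..max loss."""
    return min(range(len(agg_pmf)), key=lambda d: total_cost_of_risk(agg_pmf, d, **kwargs))

pmf = [0.5, 0.2, 0.1, 0.1, 0.1]   # illustrative aggregate pmf on {0, ..., 4}
best = best_retention(pmf)
```

In practice the objective would use VaR or TVaR of the retained loss and the reinsurer's actual quoted premiums rather than this loaded net premium, but the structure of the search is the same.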

The optimal d depends on the insurer's risk appetite, available capital, the portfolio's loss characteristics, and the reinsurer's pricing. The reinsurer's financial strength and claims-paying reputation also matter, since the insurer is relying on the reinsurer to pay when losses are at their worst.