Fiveable

📊Actuarial Mathematics Unit 2 Review


2.2 Poisson processes and arrival times


Written by the Fiveable Content Team • Last updated August 2025

Poisson processes model random events occurring over time or space, and they're one of the most important tools in actuarial mathematics. They give actuaries a rigorous framework for predicting claim frequencies, calculating premiums, and managing reserves. By understanding how events arrive and how long you wait between them, you can build models that drive real pricing and risk decisions.

Poisson process fundamentals

A Poisson process is a counting process that tracks how many events occur over a given interval of time (or space). It's built on a small set of assumptions that make the math tractable while still capturing the behavior of many real-world phenomena.

Definition of Poisson process

The process is defined by a single rate parameter $\lambda$, which represents the average number of events per unit time. If you observe the process over an interval of length $t$, the number of events $N(t)$ follows a Poisson distribution with mean $\lambda t$:

$$P(N(t) = k) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}, \quad k = 0, 1, 2, \ldots$$

A key structural feature: the numbers of events in non-overlapping intervals are independent random variables. So what happens between time 0 and time 5 tells you nothing about what happens between time 5 and time 10.
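
This PMF is easy to evaluate directly. A minimal sketch; the rate of 5 claims per month is an assumed example:

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float, t: float) -> float:
    """P(N(t) = k) for a Poisson process with rate lam over an interval of length t."""
    mu = lam * t
    return mu ** k * exp(-mu) / factorial(k)

# With lam = 5 claims per month, probability of exactly 3 claims in one month:
p3 = poisson_pmf(3, lam=5.0, t=1.0)

# Sanity check: probabilities over k sum to 1 (numerically, over a wide range of k)
total = sum(poisson_pmf(k, 5.0, 1.0) for k in range(100))
```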

Assumptions and properties

A counting process qualifies as a Poisson process when it satisfies three conditions:

  1. No simultaneous events. Events occur one at a time. The probability of two or more events in an infinitesimally small interval is essentially zero.
  2. Independent increments. The numbers of events in non-overlapping time intervals are independent of each other.
  3. Rate proportionality. The probability of exactly one event in a tiny interval of length $h$ is approximately $\lambda h$.

The process also has stationary increments: the distribution of the number of events in any interval depends only on the interval's length, not on where it sits on the time axis. An interval from $t = 3$ to $t = 5$ has the same distribution as one from $t = 100$ to $t = 102$.

Memoryless property

The memoryless property means the future of the process doesn't depend on its past. Formally:

$$P(N(t+s) - N(s) = k \mid N(s) = n) = P(N(t) = k)$$

In practical terms, if you're waiting for the next insurance claim and 3 days have already passed, the expected additional waiting time is the same as it was at the start. The process doesn't "remember" how long it's been. This property flows directly from the independent and stationary increments assumptions.

Relationship to exponential distribution

The time between consecutive events (the inter-arrival time) in a Poisson process follows an exponential distribution with rate $\lambda$. Its density is:

$$f(t) = \lambda e^{-\lambda t}, \quad t \geq 0$$

This connection is fundamental. The Poisson process and the exponential distribution are two sides of the same coin: one describes the count of events in an interval, the other describes the waiting time between events. Many derivations in actuarial math exploit this duality.

Arrival times in Poisson processes

Understanding when events occur, not just how many occur, is central to actuarial modeling. Arrival time analysis lets you estimate expected waiting times between claims and forecast when future events are likely.

Inter-arrival times

Inter-arrival times are the gaps between consecutive events. In a homogeneous Poisson process with rate $\lambda$:

  • The inter-arrival times are independent and identically distributed (i.i.d.) exponential random variables with rate $\lambda$.
  • The mean inter-arrival time is $1/\lambda$. If claims arrive at a rate of 5 per month, the average time between claims is $1/5 = 0.2$ months, or about 6 days.
  • The variance of each inter-arrival time is $1/\lambda^2$.

Exponential distribution of inter-arrival times

The full distributional details for an inter-arrival time $T$:

  • PDF: $f(t) = \lambda e^{-\lambda t}$ for $t \geq 0$
  • CDF: $F(t) = 1 - e^{-\lambda t}$ for $t \geq 0$
  • Survival function: $P(T > t) = e^{-\lambda t}$

The memoryless property of the exponential distribution states that $P(T > t + s \mid T > s) = P(T > t)$. If you've already waited $s$ units without an event, the remaining wait time has the same distribution as if you'd just started waiting. The exponential distribution is the only continuous distribution with this property.
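
The memoryless identity can be verified numerically in one line. A minimal check, assuming an illustrative rate of 0.5 events per day:

```python
from math import exp

LAM = 0.5  # illustrative rate: events per day

def survival(t: float) -> float:
    """P(T > t) = e^(-lam*t) for an exponential inter-arrival time."""
    return exp(-LAM * t)

# Memorylessness: P(T > s + t | T > s) = P(T > s + t) / P(T > s) = P(T > t)
s, t = 3.0, 2.0
conditional = survival(s + t) / survival(s)
unconditional = survival(t)
```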

Probability of arrivals in time intervals

The number of events in any interval of length $t$ follows the Poisson PMF:

$$P(N(t) = k) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}, \quad k = 0, 1, 2, \ldots$$

  • Expected count: $E[N(t)] = \lambda t$
  • Variance: $\text{Var}(N(t)) = \lambda t$

The fact that the mean equals the variance is a signature property of the Poisson distribution. If you observe data where the variance significantly exceeds the mean (overdispersion), a standard Poisson process may not be the right model.
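
A quick dispersion check along these lines is a common first diagnostic. A sketch on hypothetical count data (the `counts` values are made up for illustration):

```python
# Hypothetical monthly claim counts for one portfolio
counts = [4, 7, 3, 6, 5, 9, 2, 5, 6, 4, 8, 5]
n = len(counts)

sample_mean = sum(counts) / n
sample_var = sum((c - sample_mean) ** 2 for c in counts) / (n - 1)

# For Poisson data this ratio should be near 1; values well above 1
# suggest overdispersion and a mixed / negative binomial model instead.
dispersion_ratio = sample_var / sample_mean
```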

Conditional probabilities of arrivals

Because of independent increments, conditioning on the past doesn't change the distribution of future counts:

$$P(N(s+t) - N(s) = k \mid N(s) = n) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}$$

The number of events in $(s, s+t]$ is independent of whatever happened in $(0, s]$. This simplifies conditional calculations enormously and is one reason Poisson processes are so analytically convenient.

Poisson process applications

Modeling rare events

Poisson processes are a natural fit for rare events: natural disasters, industrial accidents, extreme financial losses. The key requirements are met when events occur independently, one at a time, and at a roughly constant rate. For example, if a region experiences an average of 2.3 significant earthquakes per year, you can model the count of earthquakes in any time window using a Poisson distribution with $\lambda = 2.3$ per year.

Insurance claims modeling

This is the bread-and-butter actuarial application. The standard approach separates frequency from severity:

  • Frequency: The number of claims per period is modeled as a Poisson random variable. The rate $\lambda$ is estimated from historical claims data.
  • Severity: The size of each individual claim is modeled separately, often using lognormal, Pareto, or gamma distributions.

For example, a large auto insurer might observe an average of 850 claims per month across a portfolio. The monthly claim count would be modeled as $N \sim \text{Poisson}(850)$, with individual claim amounts modeled independently.

Queueing theory applications

Poisson arrivals are a foundational assumption in queueing theory. The classic M/M/1 queue assumes Poisson arrivals (rate $\lambda$) and exponential service times (rate $\mu$). From these assumptions, you can derive:

  • Average number of customers in the system: $\lambda/(\mu - \lambda)$
  • Average waiting time: $1/(\mu - \lambda)$

These results apply to call centers, hospital emergency departments, and any system where customers arrive randomly and wait for service.


Reliability engineering

When modeling component failures, the time between failures is often assumed to follow an exponential distribution, which corresponds to a Poisson process for the failure count. The mean time between failures (MTBF) is $1/\lambda$. If a machine fails on average once every 500 hours, $\lambda = 1/500$ per hour, and you can calculate the probability of surviving any given operating period using the exponential survival function.
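
A sketch of that calculation, reusing the 500-hour MTBF from the example (the 100-hour operating window is an assumed illustration):

```python
from math import exp

MTBF = 500.0      # mean time between failures, in hours (from the example above)
LAM = 1.0 / MTBF  # failure rate per hour

def prob_no_failure(hours: float) -> float:
    """Exponential survival function: P(T > hours) = e^(-lam * hours)."""
    return exp(-LAM * hours)

# Chance of completing a 100-hour run with no failure:
p_100h = prob_no_failure(100.0)
```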

Poisson process variations

The standard homogeneous Poisson process is elegant but restrictive. Several generalizations relax its assumptions to handle more realistic scenarios.

Non-homogeneous Poisson processes

A non-homogeneous Poisson process (NHPP) allows the rate to vary over time through an intensity function $\lambda(t)$. The expected number of events in the interval $(a, b]$ is:

$$E[N(b) - N(a)] = \int_a^b \lambda(t)\, dt$$

NHPPs are useful when event rates change predictably. A retail store might see customer arrivals at a rate of 20 per hour during peak times and 5 per hour during off-peak times. The process still has independent increments, but the increments are no longer stationary.

Compound Poisson processes

A compound Poisson process attaches a random "size" to each event. If $N(t)$ is a Poisson process and $X_1, X_2, \ldots$ are i.i.d. random variables representing event sizes, the aggregate process is:

$$S(t) = \sum_{i=1}^{N(t)} X_i$$

This is the standard model for aggregate claims in insurance. The number of claims follows a Poisson process, and each claim has a random dollar amount. The mean and variance of $S(t)$ are:

  • $E[S(t)] = \lambda t \cdot E[X]$
  • $\text{Var}(S(t)) = \lambda t \cdot E[X^2]$
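
These two moment formulas are direct to evaluate. The frequency and severity moments below are assumed for illustration (an exponential severity with mean 100 has $E[X^2] = 2 \cdot 100^2$):

```python
# Moments of an illustrative compound Poisson aggregate-loss model
lam, t = 3.0, 1.0           # claim frequency per period, horizon
mean_x = 100.0              # E[X], e.g. exponential severity with mean 100
second_moment_x = 20000.0   # E[X^2] = 2 * mean^2 for that exponential

mean_S = lam * t * mean_x              # E[S(t)] = lambda * t * E[X]
var_S = lam * t * second_moment_x      # Var(S(t)) = lambda * t * E[X^2]
```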

Mixed Poisson processes

In a mixed Poisson process, the rate $\lambda$ is itself a random variable drawn from some mixing distribution. This captures heterogeneity across a population. For auto insurance, different drivers have genuinely different risk levels. If $\lambda$ follows a gamma distribution, the resulting marginal distribution of claim counts is negative binomial, which naturally accommodates overdispersion.

Doubly stochastic Poisson processes

Also called Cox processes, these allow $\lambda(t)$ to be a stochastic process rather than a deterministic function or a single random variable. The event process is Poisson conditional on the realized path of $\lambda(t)$. Cox processes are used in credit risk modeling, where default intensities fluctuate with economic conditions, and in seismology, where earthquake rates vary unpredictably.

Estimating Poisson process parameters

Maximum likelihood estimation

For a homogeneous Poisson process, the MLE of $\lambda$ is straightforward:

$$\hat{\lambda} = \frac{\text{total number of events}}{\text{total observation time}}$$

If you observe 47 claims over 12 months, $\hat{\lambda} = 47/12 \approx 3.92$ claims per month. MLE is asymptotically unbiased and achieves the lowest possible variance among consistent estimators (it's asymptotically efficient).

Method of moments

The method of moments equates sample moments to theoretical moments and solves for the parameters. For a Poisson process, since both the mean and variance equal $\lambda t$, the estimator is the sample mean of event counts per unit time. This gives the same result as MLE for the homogeneous case, but method of moments can be less efficient for more complex models or small samples.

Bayesian estimation

Bayesian estimation combines a prior distribution for $\lambda$ with observed data to produce a posterior distribution. A common choice is a gamma prior, because it's conjugate to the Poisson likelihood. If the prior is $\text{Gamma}(\alpha, \beta)$ and you observe $n$ events in time $T$, the posterior is:

$$\lambda \mid \text{data} \sim \text{Gamma}(\alpha + n, \beta + T)$$

The posterior mean is $(\alpha + n)/(\beta + T)$, which is a weighted average of the prior mean and the MLE. Bayesian estimation is especially valuable when data is sparse and you have credible prior information.
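
The conjugate update is a few lines of arithmetic. The prior parameters below are illustrative; the data (47 claims over 12 months) reuses the MLE example:

```python
# Conjugate gamma-Poisson update; prior parameters are assumed for illustration
alpha0, beta0 = 2.0, 1.0   # prior Gamma(alpha, beta): prior mean 2 claims/month
n_events, T = 47, 12.0     # observed 47 claims over 12 months

alpha_post = alpha0 + n_events
beta_post = beta0 + T
posterior_mean = alpha_post / beta_post

# The posterior mean blends the prior mean and the MLE:
mle = n_events / T
prior_mean = alpha0 / beta0
w = beta0 / (beta0 + T)                  # weight placed on the prior
blended = w * prior_mean + (1 - w) * mle
```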

Confidence intervals for parameters

For large samples, an approximate $(1 - \alpha)$ confidence interval for $\lambda$ uses the normal approximation:

$$\hat{\lambda} \pm z_{\alpha/2} \sqrt{\frac{\hat{\lambda}}{T}}$$

where $T$ is the total observation time. For exact intervals, you can use the relationship between the Poisson distribution and the chi-square distribution. If $n$ events are observed, an exact $100(1-\alpha)\%$ confidence interval for $\lambda T$ is:

$$\left(\tfrac{1}{2}\chi^2_{2n,\,\alpha/2}, \quad \tfrac{1}{2}\chi^2_{2(n+1),\,1-\alpha/2}\right)$$

Divide by $T$ to get the interval for $\lambda$.
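
A sketch of the normal-approximation interval using Python's standard library (the data reuses the 47-claims-in-12-months example; `NormalDist.inv_cdf` supplies the z-quantile):

```python
from math import sqrt
from statistics import NormalDist

n, T = 47, 12.0                  # 47 events over 12 months, as in the MLE example
lam_hat = n / T

z = NormalDist().inv_cdf(0.975)  # two-sided 95% interval => alpha/2 = 0.025
half_width = z * sqrt(lam_hat / T)
ci_low, ci_high = lam_hat - half_width, lam_hat + half_width
```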

Poisson process simulation

Simulation is essential for studying process behavior, testing statistical methods, and generating scenarios for risk analysis.

Generating Poisson random variables

To generate a Poisson random variable with mean $\mu = \lambda t$:

  1. Inverse transform method: Generate uniform random numbers and use the Poisson CDF to map them to counts. Accumulate probabilities $P(N = 0), P(N = 1), \ldots$ until the cumulative probability exceeds the uniform draw.
  2. For large $\mu$: Use the normal approximation $N \approx \text{round}(\mu + \sqrt{\mu} \cdot Z)$ where $Z \sim N(0,1)$, or use more sophisticated algorithms like the one by Ahrens and Dieter.

Simulating arrival times

The most direct way to simulate a Poisson process is through its inter-arrival times:

  1. Generate i.i.d. exponential random variables $T_1, T_2, T_3, \ldots$ with rate $\lambda$. Each can be generated as $T_i = -\ln(U_i)/\lambda$ where $U_i \sim \text{Uniform}(0,1)$.
  2. Compute arrival times as cumulative sums: $S_n = T_1 + T_2 + \cdots + T_n$.
  3. Stop when $S_n$ exceeds the desired time horizon.
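
The three steps above can be sketched as follows; the rate and horizon are assumed example values:

```python
import math
import random

random.seed(1)

def simulate_poisson_arrivals(lam: float, horizon: float) -> list[float]:
    """Arrival times of a homogeneous Poisson process on (0, horizon]."""
    arrivals, t = [], 0.0
    while True:
        # Step 1: exponential inter-arrival via inverse transform
        # (1 - U keeps the log argument strictly positive)
        t += -math.log(1.0 - random.random()) / lam
        # Step 3: stop once the cumulative sum passes the horizon
        if t > horizon:
            return arrivals
        arrivals.append(t)  # Step 2: cumulative sum is the next arrival time

times = simulate_poisson_arrivals(lam=2.0, horizon=50.0)  # expect about 100 arrivals
```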

For non-homogeneous processes, the thinning algorithm is commonly used:

  1. Find an upper bound $\lambda^* \geq \lambda(t)$ for all $t$.
  2. Simulate a homogeneous Poisson process with rate $\lambda^*$.
  3. Accept each event at time $t$ with probability $\lambda(t)/\lambda^*$; otherwise discard it.
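
A sketch of the thinning algorithm, assuming a hypothetical sinusoidal intensity that stays between 5 and 20 events per hour (echoing the retail-store example above):

```python
import math
import random

random.seed(7)

def intensity(t: float) -> float:
    """Hypothetical time-varying rate (events/hour), between 5 and 20."""
    return 5.0 + 15.0 * math.sin(math.pi * t / 12.0) ** 2

LAM_STAR = 20.0  # step 1: upper bound, intensity(t) <= LAM_STAR for all t

def simulate_nhpp(horizon: float) -> list[float]:
    """Thinning: simulate at rate LAM_STAR, keep each point with prob intensity/LAM_STAR."""
    arrivals, t = [], 0.0
    while True:
        t += -math.log(1.0 - random.random()) / LAM_STAR  # step 2: homogeneous candidate
        if t > horizon:
            return arrivals
        if random.random() < intensity(t) / LAM_STAR:     # step 3: accept/reject
            arrivals.append(t)

events = simulate_nhpp(24.0)  # expected count = integral of intensity over 24h = 300
```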

Monte Carlo methods

Monte Carlo estimation works by simulating many independent realizations of the process and averaging the results. To estimate the probability that total claims exceed a threshold:

  1. Simulate $N$ realizations of the claim process (frequency and severity).
  2. For each realization, compute the aggregate loss.
  3. The proportion of realizations exceeding the threshold estimates the probability.

Accuracy improves with more simulations, with the standard error decreasing proportionally to $1/\sqrt{N}$.
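
The three-step procedure might look like this; the frequency, severity, and threshold are assumed example values, with exponential severities for simplicity:

```python
import math
import random

random.seed(3)

LAM = 5.0          # claims per period (illustrative)
MEAN_X = 100.0     # exponential claim severity with mean 100 (illustrative)
THRESHOLD = 1000.0
N_SIMS = 20000

def poisson_draw(mu: float) -> int:
    """Knuth's multiplicative method for a Poisson(mu) draw."""
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def aggregate_loss() -> float:
    """Step 1-2: one realization of frequency and severity, summed to an aggregate loss."""
    n = poisson_draw(LAM)
    return sum(-MEAN_X * math.log(1.0 - random.random()) for _ in range(n))

# Step 3: proportion of realizations exceeding the threshold
exceed = sum(aggregate_loss() > THRESHOLD for _ in range(N_SIMS))
p_hat = exceed / N_SIMS
std_err = math.sqrt(p_hat * (1.0 - p_hat) / N_SIMS)  # shrinks like 1/sqrt(N)
```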

Variance reduction techniques

Standard Monte Carlo can require a very large number of simulations for precise estimates. Variance reduction techniques improve efficiency:

  • Antithetic variates: If $U$ generates one realization, use $1 - U$ to generate a second. The negative correlation between the pair reduces overall variance.
  • Control variates: Use a related quantity with a known expected value to adjust estimates.
  • Importance sampling: Oversample from regions of the distribution that matter most (e.g., the tail) and reweight accordingly.
  • Stratified sampling: Divide the probability space into strata and sample from each, ensuring better coverage.
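
The antithetic idea from the list above can be demonstrated on exponential waiting times; all parameters are illustrative:

```python
import math
import random

random.seed(11)

LAM = 2.0        # exponential rate; true mean waiting time is 1/LAM = 0.5
N_PAIRS = 10000

anti_pair_means, indep_pair_means = [], []
for _ in range(N_PAIRS):
    u1, u2 = random.random(), random.random()
    # keep u away from 0 and 1 so every log stays finite
    u1 = min(max(u1, 1e-12), 1.0 - 1e-12)
    u2 = min(max(u2, 1e-12), 1.0 - 1e-12)
    x = -math.log(1.0 - u1) / LAM
    x_anti = -math.log(u1) / LAM       # antithetic partner, generated from 1 - u1
    y = -math.log(1.0 - u2) / LAM      # independent partner, for comparison
    anti_pair_means.append(0.5 * (x + x_anti))
    indep_pair_means.append(0.5 * (x + y))

def sample_var(v: list[float]) -> float:
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / (len(v) - 1)

# The negative correlation within antithetic pairs lowers the estimator's variance
var_anti = sample_var(anti_pair_means)
var_indep = sample_var(indep_pair_means)
```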

Advanced topics in Poisson processes

Superposition of Poisson processes

If you combine (superpose) independent Poisson processes with rates $\lambda_1, \lambda_2, \ldots, \lambda_n$, the result is a Poisson process with rate:

$$\lambda = \lambda_1 + \lambda_2 + \cdots + \lambda_n$$

This is useful when events come from multiple independent sources. An insurance company receiving claims from three independent product lines with rates 10, 15, and 8 per month can model the combined stream as a single Poisson process with rate 33 per month.

Thinning of Poisson processes

Thinning is the reverse of superposition. Starting from a Poisson process with rate $\lambda$, independently keep each event with probability $p$ and discard it with probability $1 - p$. The kept events form a Poisson process with rate $p\lambda$, and the discarded events form an independent Poisson process with rate $(1-p)\lambda$.

For example, if defective items occur at rate $\lambda$ and each defective item is caught by inspection with probability $p = 0.9$, the detected defects follow a Poisson process with rate $0.9\lambda$.

Poisson process transformations

Poisson processes can be transformed in several ways:

  • Time scaling: Replacing $t$ with $ct$ in a Poisson process with rate $\lambda$ yields a Poisson process with rate $c\lambda$. This is useful for converting between time units.
  • Random time change: Substituting a random process for the deterministic time variable creates a new process. If $N(t)$ is Poisson and you replace $t$ with a subordinator (a non-decreasing random process), the result can model systems with random operating times.

Marked Poisson processes

A marked Poisson process associates a random mark $M_i$ with each event. The marks can be continuous (claim amounts), discrete (claim types), or even multivariate (location and severity together). The event times and marks are typically assumed independent.

Marked Poisson processes generalize compound Poisson processes and are the natural framework for modeling heterogeneous event streams. In catastrophe modeling, each event might carry marks for location, magnitude, and insured loss.

Poisson processes in actuarial applications

Pricing insurance contracts

The standard actuarial pricing framework uses the frequency-severity approach:

  1. Model claim frequency with a Poisson process (rate $\lambda$).
  2. Model claim severity with a separate distribution (lognormal, Pareto, gamma, etc.).
  3. Compute the expected aggregate loss: $E[S] = \lambda \cdot E[X]$ per unit time.
  4. Add loadings for expenses, profit margin, and risk to arrive at the premium.

For a motor insurance portfolio with $\lambda = 0.15$ claims per policy per year and average claim size of \$4,200, the pure premium per policy is $0.15 \times 4{,}200 = \$630$ per year.

Ruin theory and Poisson processes

The Cramér-Lundberg model is the classical ruin theory framework. An insurer starts with initial surplus $u$, collects premiums at rate $c$ per unit time, and pays claims that arrive as a compound Poisson process. The surplus at time $t$ is:

$$U(t) = u + ct - S(t)$$

where $S(t) = \sum_{i=1}^{N(t)} X_i$ is the aggregate claims process. Ruin occurs if $U(t) < 0$ for any $t > 0$. The probability of ruin depends on the initial surplus, the premium rate, the claim frequency $\lambda$, and the claim size distribution. For exponentially distributed claims with mean $1/\mu$, the ruin probability has a closed-form solution:

$$\psi(u) = \frac{\lambda}{\mu c}\, e^{-(\mu - \lambda/c)u}$$

provided $c > \lambda/\mu$ (premiums exceed expected claims).
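
The closed-form ruin probability is easy to evaluate; the portfolio numbers below are assumed for illustration (10 claims per year, mean claim 500, premium 6,000 per year, i.e. a 20% loading):

```python
from math import exp

def ruin_probability(u: float, lam: float, mu: float, c: float) -> float:
    """Cramér-Lundberg ruin probability for exponential claims with mean 1/mu.
    Valid only when c > lam/mu (positive safety loading)."""
    if c <= lam / mu:
        raise ValueError("premium rate must exceed expected claims per unit time")
    return (lam / (mu * c)) * exp(-(mu - lam / c) * u)

psi_0 = ruin_probability(0.0, lam=10.0, mu=1.0 / 500.0, c=6000.0)
psi_10k = ruin_probability(10000.0, lam=10.0, mu=1.0 / 500.0, c=6000.0)
```

Note how a larger initial surplus drives the ruin probability down exponentially.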

Reinsurance modeling

Reinsurance transfers part of the risk from a primary insurer (cedent) to a reinsurer. Poisson process models help evaluate different treaty structures:

  • Quota share: The reinsurer takes a fixed proportion $\alpha$ of every claim. The cedent's retained process is a compound Poisson process with the same frequency but scaled severity $(1-\alpha)X_i$.
  • Excess-of-loss: The reinsurer pays the portion of each claim exceeding a retention $d$. The reinsurer's process involves only claims where $X_i > d$, which by thinning is itself a Poisson process.
  • Stop-loss: The reinsurer covers aggregate losses exceeding a threshold. This requires the full aggregate loss distribution, typically computed via the compound Poisson model.

Risk management with Poisson processes

Actuaries use Poisson-based aggregate loss models to compute key risk measures:

  • Value-at-Risk (VaR): The loss amount at a specified quantile (e.g., 99.5%). If the aggregate loss distribution gives $P(S \leq x) = 0.995$, then $\text{VaR}_{99.5\%} = x$.
  • Tail Value-at-Risk (TVaR): The expected loss given that the loss exceeds VaR. TVaR captures the severity of tail events, not just their threshold.

These measures drive capital requirements under regulatory frameworks like Solvency II. The aggregate loss distribution $S(t)$ is typically computed using Panjer's recursion, FFT methods, or Monte Carlo simulation from the underlying compound Poisson model.
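
An empirical VaR/TVaR sketch via Monte Carlo from a compound Poisson model (all parameters are illustrative; a production model would use fitted frequency and severity distributions):

```python
import math
import random

random.seed(2024)

LAM = 20.0      # annual claim frequency (illustrative)
MEAN_X = 50.0   # exponential claim severity with mean 50 (illustrative)
N_SIMS = 20000

def poisson_draw(mu: float) -> int:
    """Knuth's multiplicative method for a Poisson(mu) draw."""
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def aggregate_loss() -> float:
    """One realization of the compound Poisson aggregate loss."""
    n = poisson_draw(LAM)
    return sum(-MEAN_X * math.log(1.0 - random.random()) for _ in range(n))

losses = sorted(aggregate_loss() for _ in range(N_SIMS))
q = 0.995
idx = int(q * N_SIMS)                          # index of the empirical 99.5% quantile
var_995 = losses[idx]                          # Value-at-Risk at 99.5%
tvar_995 = sum(losses[idx:]) / (N_SIMS - idx)  # mean loss beyond VaR
```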