🔀Stochastic Processes Unit 7 Review

7.1 Definition and properties of renewal processes

Written by the Fiveable Content Team • Last updated August 2025

Definition of renewal processes

A renewal process models events that occur over time where the gaps between consecutive events are independent and identically distributed (i.i.d.) random variables. This framework captures a simple but powerful idea: each time an event occurs, the process "renews" itself, and the future behaves statistically the same as it did from the start.

Renewal processes show up throughout applied probability. They're foundational in reliability theory (modeling equipment failures), queueing theory (modeling customer arrivals), and inventory management (modeling demand). They also serve as building blocks for more complex stochastic models, and the Poisson process is a special case where interarrival times are exponential.

Interarrival times

The interarrival times $X_1, X_2, X_3, \dots$ are the time intervals between consecutive events. These are assumed to be i.i.d. non-negative random variables with a common distribution function $F(x) = P(X \leq x)$. The choice of distribution for the $X_i$ determines the character of the renewal process:

  • Exponential interarrival times produce a Poisson process
  • Gamma interarrival times arise when each "event" requires multiple exponential stages
  • Weibull interarrival times are common in reliability modeling, where failure rates increase or decrease over time

The mean interarrival time $\mu = E[X]$ plays a central role in nearly every renewal theorem.

Sequence of events (renewal epochs)

The renewal epochs $S_0, S_1, S_2, \dots$ mark when events occur:

  • $S_0 = 0$ (the process starts at time zero)
  • $S_n = X_1 + X_2 + \cdots + X_n$ for $n \geq 1$

So $S_n$ is a partial sum of i.i.d. random variables, which is why results from probability theory (like the law of large numbers) apply directly.

The counting process $N(t)$ records how many events have occurred by time $t$:

$$N(t) = \max\{n \geq 0 : S_n \leq t\}$$

Note the relationship: $N(t) \geq n$ if and only if $S_n \leq t$. This duality between $N(t)$ and $S_n$ is used constantly in proofs and calculations.
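The duality can be checked numerically. A minimal sketch, assuming Uniform(0, 2) interarrival times (an illustrative choice with mean $\mu = 1$):

```python
import random
import bisect

random.seed(0)

# Simulate one path of a renewal process with Uniform(0, 2) interarrival
# times (an illustrative distribution with mean mu = 1).
n = 10_000
gaps = [random.uniform(0.0, 2.0) for _ in range(n)]

# Renewal epochs S_n = X_1 + ... + X_n (S_0 = 0 is implicit).
S = []
total = 0.0
for x in gaps:
    total += x
    S.append(total)

def N(t):
    """Counting process: number of renewal epochs S_n <= t."""
    return bisect.bisect_right(S, t)

# Duality check: N(t) >= k  <=>  S_k <= t, at a few sample points.
for t in (1.0, 50.0, 500.0):
    for k in (1, 10, 100):
        assert (N(t) >= k) == (S[k - 1] <= t)

print(N(1000.0))  # roughly 1000 events by t = 1000, since mu = 1
```

Because the path is just a sorted list of epochs, `bisect_right` evaluates $N(t)$ in $O(\log n)$ per query.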

The i.i.d. assumption

The two parts of the i.i.d. assumption each do specific work:

  • Independence means the length of one interarrival time gives you no information about any other. Knowing that the last gap was unusually long doesn't change the distribution of the next gap.
  • Identically distributed means every interarrival time is drawn from the same distribution $F$. The process doesn't age or drift.

Together, these assumptions are what make the process "renew" at each event. At every renewal epoch, the future of the process is a probabilistic copy of the original process. This is the structural property that drives all the major theorems.

Properties of renewal processes

Memoryless property and the Poisson process

The memoryless property is specific to the exponential distribution. If $X \sim \text{Exp}(\lambda)$, then for all $s, t \geq 0$:

$$P(X > s + t \mid X > s) = P(X > t)$$

This says that given you've already waited $s$ units without an event, the remaining wait time has the same distribution as if you'd just started. No other continuous distribution has this property.

When interarrival times are exponential, the renewal process becomes a Poisson process, which is the only renewal process that is also a continuous-time Markov chain. For general interarrival distributions, the process does not have the memoryless property, and the time since the last renewal carries information about the time until the next one.
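The memoryless property is easy to verify empirically. A quick Monte Carlo check, with the rate $\lambda = 0.5$ and the waits $s, t$ chosen arbitrarily for illustration:

```python
import random

random.seed(1)

lam = 0.5       # exponential rate (illustrative value)
s, t = 1.0, 2.0
samples = [random.expovariate(lam) for _ in range(200_000)]

# Conditional survival P(X > s + t | X > s), estimated from the samples.
survived_s = [x for x in samples if x > s]
cond = sum(1 for x in survived_s if x > s + t) / len(survived_s)

# Unconditional survival P(X > t).
uncond = sum(1 for x in samples if x > t) / len(samples)

# Both estimates should be close to exp(-lam * t) ≈ 0.368.
print(round(cond, 3), round(uncond, 3))
```

Running the same check with, say, uniform samples makes the two estimates visibly disagree, which is the point: only the exponential forgets how long you've already waited.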

Counting process

The counting process $N(t)$ is a non-negative, integer-valued, non-decreasing stochastic process. Its key properties:

  • $N(0) = 0$
  • $N(t) \to \infty$ as $t \to \infty$ (assuming $P(X > 0) > 0$ so events don't pile up)
  • $N(t)$ has right-continuous sample paths with jumps of size 1

The full distribution of $N(t)$ depends on the interarrival distribution $F$, and in general there's no closed-form expression. That's why the renewal function and asymptotic theorems are so valuable.


Renewal function

The renewal function $M(t) = E[N(t)]$ gives the expected number of renewals by time $t$. It can be expressed as:

$$M(t) = \sum_{n=1}^{\infty} F^{*n}(t)$$

where $F^{*n}(t) = P(S_n \leq t)$ is the $n$-fold convolution of $F$. Each term $F^{*n}(t)$ is the probability that at least $n$ events have occurred by time $t$.

The renewal function also satisfies the renewal equation:

$$M(t) = F(t) + \int_0^t M(t - x)\, dF(x)$$

The intuition here: $F(t)$ accounts for the first renewal occurring by time $t$, and the integral accounts for all subsequent renewals by conditioning on when the first renewal happens. This integral equation is a Volterra equation of the second kind and can sometimes be solved analytically (e.g., for exponential $F$) or numerically.
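For exponential interarrival times the convolution series can be summed directly, since $F^{*n}$ is then the Erlang$(n, \lambda)$ CDF and $M(t) = \lambda t$ exactly. A sketch of that check (rate and horizon are illustrative values):

```python
import math

lam, t = 2.0, 5.0   # illustrative rate and horizon

def erlang_cdf(n, lam, t):
    """F^{*n}(t) = P(S_n <= t) when F is Exp(lam): the Erlang(n, lam) CDF."""
    return 1.0 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                     for k in range(n))

# M(t) = sum_{n>=1} F^{*n}(t); truncate once terms become negligible.
M = 0.0
n = 1
while True:
    term = erlang_cdf(n, lam, t)
    M += term
    if term < 1e-12:
        break
    n += 1

print(round(M, 6))   # exponential case: M(t) = lam * t = 10
```

The terms $F^{*n}(t)$ decay factorially once $n$ exceeds $\lambda t$, so the truncated series converges after a few dozen terms here.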

Elementary renewal theorem

The elementary renewal theorem (ERT) gives the long-run renewal rate. If $\mu = E[X] < \infty$, then:

$$\lim_{t \to \infty} \frac{N(t)}{t} = \frac{1}{\mu} \quad \text{with probability 1}$$

and equivalently:

$$\lim_{t \to \infty} \frac{M(t)}{t} = \frac{1}{\mu}$$

The first statement follows from the strong law of large numbers applied to $S_n / n \to \mu$, then inverting. The second is the expectation version.

Example: If a lightbulb has a mean lifetime of $\mu = 500$ hours, then in the long run you'll replace about $1/500 = 0.002$ bulbs per hour, or roughly 1 bulb every 500 hours.
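A short simulation of the lightbulb example, assuming (purely for illustration) Uniform(0, 1000) lifetimes, which have the stated mean of 500 hours:

```python
import random

random.seed(2)

mu = 500.0             # mean lifetime in hours (from the example)
horizon = 2_000_000.0  # long simulation horizon in hours

# Lifetimes drawn from an illustrative Uniform(0, 1000) distribution
# with mean mu = 500; the ERT limit depends only on the mean.
t, count = 0.0, 0
while True:
    t += random.uniform(0.0, 2 * mu)
    if t > horizon:
        break
    count += 1

rate = count / horizon
print(round(rate, 5))   # should be close to 1/mu = 0.002
```

Swapping in any other lifetime distribution with mean 500 leaves the long-run rate unchanged, which is exactly what the ERT promises.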

Blackwell's theorem

Blackwell's theorem refines the ERT by describing the renewal function over finite intervals. For a non-lattice interarrival distribution (one that isn't concentrated on multiples of some fixed value $d$):

$$\lim_{t \to \infty} [M(t + h) - M(t)] = \frac{h}{\mu}$$

for any fixed $h > 0$. This says that asymptotically, the expected number of renewals in any interval of length $h$ is $h/\mu$, regardless of where you place that interval.

For lattice distributions (supported on $\{d, 2d, 3d, \dots\}$), the analogous result is:

$$\lim_{n \to \infty} P(S_k = nd \text{ for some } k) = \frac{d}{\mu}$$

Blackwell's theorem is strictly stronger than the ERT because it controls the local behavior of $M(t)$, not just the global average.
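The non-lattice statement can be checked by simulation: estimate $M(t+h) - M(t)$ by averaging the number of renewals landing in $(t, t+h]$ over many independent paths. A sketch with Uniform(0, 2) gaps (an illustrative non-lattice choice with $\mu = 1$):

```python
import random

random.seed(3)

mu, t, h = 1.0, 100.0, 2.0
reps = 20_000
total_in_window = 0

for _ in range(reps):
    s = 0.0
    count = 0
    # Uniform(0, 2) gaps: a non-lattice distribution with mean mu = 1.
    while s <= t + h:
        s += random.uniform(0.0, 2.0)
        if t < s <= t + h:
            count += 1
    total_in_window += count

estimate = total_in_window / reps   # Monte Carlo estimate of M(t+h) - M(t)
print(round(estimate, 3))           # Blackwell: ≈ h / mu = 2
```

Moving the window (changing $t$) leaves the estimate essentially unchanged, which is the "regardless of where you place that interval" part of the theorem.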

Key renewal theorem

The key renewal theorem is the most general asymptotic tool for renewal processes. Suppose $g(t)$ is a non-negative, directly Riemann integrable function, and define:

$$Z(t) = \int_0^t g(t - x)\, dM(x)$$

Then for a non-lattice interarrival distribution:

$$\lim_{t \to \infty} Z(t) = \frac{1}{\mu} \int_0^{\infty} g(t)\, dt$$

This theorem converts a complicated time-dependent quantity into a simple ratio. It's the workhorse behind many steady-state results in queueing and reliability. The condition "directly Riemann integrable" is a technical requirement that's satisfied by most functions you'll encounter (bounded, non-negative, and decaying to zero is sufficient).

Excess life and age

Two related quantities describe where you are within a renewal interval at time $t$:

  • Excess life (residual life) $\gamma(t) = S_{N(t)+1} - t$: the time from $t$ until the next renewal
  • Age (current life) $\delta(t) = t - S_{N(t)}$: the time from the last renewal up to $t$

The spread (or total life) is their sum: $\gamma(t) + \delta(t) = X_{N(t)+1}$, the length of the renewal interval containing $t$.

A classic result from the key renewal theorem gives the limiting distribution of the excess life. As $t \to \infty$:

$$P(\gamma(t) \leq x) \to \frac{1}{\mu} \int_0^x [1 - F(y)]\, dy$$

This limit is the equilibrium (stationary excess) distribution. The interval containing $t$ is length-biased toward longer interarrival times, which is an instance of the inspection paradox: if you arrive at a "random" time $t$, you're more likely to land in a long interval than a short one.
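The inspection paradox is visible in simulation. With Uniform(0, 2) gaps ($\mu = 1$, an illustrative choice), the limiting mean excess life is $E[X^2]/(2\mu) = (4/3)/2 = 2/3$, larger than the naive guess $\mu/2 = 0.5$:

```python
import random

random.seed(4)

t = 50.0
reps = 50_000
excess = []

for _ in range(reps):
    s = 0.0
    # Uniform(0, 2) gaps with mean mu = 1 (illustrative choice).
    while s <= t:
        s += random.uniform(0.0, 2.0)
    excess.append(s - t)   # gamma(t) = S_{N(t)+1} - t

mean_excess = sum(excess) / reps
# Limiting mean excess life is E[X^2] / (2 mu) = (4/3) / 2 = 2/3,
# larger than mu / 2 = 0.5 — the inspection paradox at work.
print(round(mean_excess, 3))
```

The naive guess $\mu/2$ would be right if the observation time landed in a typical interval; it doesn't, because long intervals cover more of the time axis.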

Renewal-reward processes

A renewal-reward process pairs each renewal cycle with a reward $R_n$. The rewards $R_1, R_2, \dots$ are i.i.d. (each $R_n$ may be correlated with its own cycle length $X_n$, but the pairs $(X_n, R_n)$ are i.i.d. across cycles). The cumulative reward by time $t$ is:

$$Y(t) = \sum_{n=1}^{N(t)} R_n$$

The renewal-reward theorem states:

$$\lim_{t \to \infty} \frac{Y(t)}{t} = \frac{E[R]}{E[X]} = \frac{E[R]}{\mu} \quad \text{with probability 1}$$

This is extremely useful. The long-run average reward per unit time equals the expected reward per cycle divided by the expected cycle length. You don't need to track the process in detail; you just need the per-cycle expectations.

Example: A taxi driver earns a random fare $R_n$ on each trip, and each trip takes a random time $X_n$. If the average fare is $E[R] = \$25$ and the average trip duration is $E[X] = 0.5$ hours, the long-run earning rate is $25/0.5 = \$50$ per hour.
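A sketch of the taxi example, with illustrative uniform distributions chosen to match the stated means:

```python
import random

random.seed(5)

horizon = 50_000.0   # hours of (simulated) driving
t, earnings = 0.0, 0.0

while True:
    # Trip time: Uniform(0.1, 0.9) hours, mean E[X] = 0.5 (illustrative).
    trip = random.uniform(0.1, 0.9)
    # Fare: Uniform(10, 40) dollars, mean E[R] = 25 (illustrative).
    fare = random.uniform(10.0, 40.0)
    if t + trip > horizon:
        break
    t += trip
    earnings += fare

rate = earnings / horizon
print(round(rate, 2))   # renewal-reward: ≈ E[R]/E[X] = 25/0.5 = 50
```

Only the two per-cycle means matter for the limit; the shapes of the fare and trip-time distributions are irrelevant, which is what makes the theorem so convenient.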


Alternating renewal processes

An alternating renewal process switches between two states, typically called "on" and "off." The on-durations $X_1, X_2, \dots$ are i.i.d., the off-durations $Y_1, Y_2, \dots$ are i.i.d., and the pairs $(X_n, Y_n)$ are independent across cycles.

Each cycle has length $X_n + Y_n$. By the renewal-reward theorem (treating "on" time as the reward), the long-run fraction of time spent in the "on" state is:

$$\lim_{t \to \infty} P(\text{on at time } t) = \frac{E[X]}{E[X] + E[Y]}$$

Example in reliability: A server has i.i.d. uptimes with mean 200 hours and i.i.d. repair times with mean 10 hours. Its long-run availability is $200/(200 + 10) \approx 0.952$, so it's operational about 95.2% of the time.
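The availability figure can be reproduced by simulation. Exponential durations are an illustrative choice here; the limit depends only on the two means:

```python
import random

random.seed(6)

horizon = 1_000_000.0   # hours
t, up_time = 0.0, 0.0

while t < horizon:
    # Uptime: exponential with mean 200 h; repair: exponential, mean 10 h.
    # (Exponential is illustrative; E[X] / (E[X] + E[Y]) holds for any
    # i.i.d. durations.)
    up = random.expovariate(1.0 / 200.0)
    down = random.expovariate(1.0 / 10.0)
    up_time += min(up, max(0.0, horizon - t))  # clip the final cycle
    t += up + down

availability = up_time / horizon
print(round(availability, 3))   # ≈ 200 / 210 ≈ 0.952
```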

Applications of renewal processes

Reliability theory

Renewal processes model the failure-repair cycle of repairable systems. The interarrival times represent times between failures, and each failure triggers a repair that "renews" the system.

  • Mean time between failures (MTBF): $\mu = E[X]$
  • Availability (via alternating renewal): $\text{MTBF} / (\text{MTBF} + \text{MTTR})$
  • Long-run failure rate: $1/\mu$ failures per unit time (by the ERT)

Weibull-distributed interarrival times are standard here because the Weibull shape parameter captures increasing (wear-out) or decreasing (burn-in) failure rates.

Queueing theory

Customer arrivals to a service system are often modeled as a renewal process. The $GI/G/1$ queue, for instance, assumes i.i.d. interarrival times (the "GI" stands for "general independent"). The Poisson process is the special case that gives the $M/G/1$ and $M/M/1$ queues.

Renewal-reward arguments help compute long-run performance metrics like average queue length, server utilization, and throughput without solving for the full transient distribution.

Inventory models

Demand arrivals for a product can be modeled as a renewal process. Each demand epoch triggers a potential inventory decision.

Under a continuous-review $(s, Q)$ policy (order quantity $Q$ whenever inventory hits level $s$), the reorder cycles form a renewal process. Renewal-reward theory then gives the long-run average cost per unit time by computing the expected cost per cycle and dividing by the expected cycle length.

Maintenance and replacement policies

Age replacement policy: Replace a component when it either fails or reaches age $T$, whichever comes first. The replacement epochs form a renewal process with interarrival times $\min(X_n, T)$. Renewal-reward theory determines the optimal $T$ that minimizes the long-run cost rate:

$$\text{Long-run cost rate} = \frac{E[\text{cost per cycle}]}{E[\min(X, T)]}$$

By differentiating with respect to $T$, you find the replacement age that balances the cost of preventive replacement against the higher cost of unplanned failure.
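A numerical sketch of this optimization, assuming (for illustration) Weibull lifetimes with shape 2 and scale 100, a preventive replacement cost of 1, and a failure cost of 5:

```python
import math

# Age replacement under Weibull(shape=2, scale=100) lifetimes, with
# preventive cost c_p = 1 and failure cost c_f = 5 (illustrative numbers).
shape, scale = 2.0, 100.0
c_p, c_f = 1.0, 5.0

def survival(x):
    """P(X > x) for the Weibull lifetime."""
    return math.exp(-((x / scale) ** shape))

def expected_cycle_length(T, steps=1000):
    """E[min(X, T)] = integral of the survival function over [0, T],
    approximated with the midpoint rule."""
    dx = T / steps
    return sum(survival((i + 0.5) * dx) for i in range(steps)) * dx

def cost_rate(T):
    p_fail = 1.0 - survival(T)                      # failure before age T
    cycle_cost = c_f * p_fail + c_p * (1 - p_fail)  # expected cost per cycle
    return cycle_cost / expected_cycle_length(T)

# Grid search for the replacement age minimizing the long-run cost rate.
best_T = min((T for T in range(10, 401)), key=cost_rate)
print(best_T, round(cost_rate(best_T), 5))
```

Because the Weibull shape parameter exceeds 1 (increasing failure rate) and failures cost more than planned replacements, the cost rate has an interior minimum; with a decreasing failure rate, preventive replacement would never pay and the search would run to the boundary.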