Stochastic Processes Unit 10 Review

10.2 Martingale stopping theorems


Written by the Fiveable Content Team • Last updated August 2025

Definition of martingales

A martingale is a stochastic process that models a "fair game": the best prediction of any future value, given everything you know now, is simply the current value. This concept is central to probability theory and shows up constantly in finance, gambling theory, and statistical inference.

Formally, a discrete-time stochastic process $\{X_n, n \geq 0\}$ is a martingale with respect to a filtration $\{\mathcal{F}_n, n \geq 0\}$ if all three conditions hold:

  • Adapted: $X_n$ is $\mathcal{F}_n$-measurable for all $n \geq 0$
  • Integrable: $\mathbb{E}[|X_n|] < \infty$ for all $n \geq 0$
  • Martingale property: $\mathbb{E}[X_{n+1} \mid \mathcal{F}_n] = X_n$ for all $n \geq 0$

The third condition is the defining one. It says that no matter what information you've accumulated, the conditional expectation of the next value equals the current value.
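A minimal simulation can make the defining property concrete. The symmetric random walk below is my choice of example (it is the canonical discrete martingale); the telescoped consequence $\mathbb{E}[X_n] = \mathbb{E}[X_0]$ is checked by Monte Carlo.

```python
import random

random.seed(42)

def simulate_walk(n_steps):
    """Symmetric ±1 random walk from 0 -- the canonical discrete martingale."""
    s, path = 0, [0]
    for _ in range(n_steps):
        s += random.choice([-1, 1])
        path.append(s)
    return path

# Telescoping E[X_{n+1} | F_n] = X_n forces E[X_n] = E[X_0] for every fixed n.
n_paths = 20_000
mean_s10 = sum(simulate_walk(10)[-1] for _ in range(n_paths)) / n_paths
# mean_s10 should sit near E[X_0] = 0, up to Monte Carlo noise.
```

The empirical mean of $X_{10}$ lands near 0, as the constant-expectation consequence of the martingale property predicts.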

Stopping times

Definition of stopping times

A stopping time is a random variable $\tau$ representing the time at which you decide to stop observing (or acting on) a stochastic process. The key constraint is that your decision to stop can only use information available up to that moment.

Formally, $\tau$ is a stopping time with respect to a filtration $\{\mathcal{F}_n, n \geq 0\}$ if the event $\{\tau \leq n\}$ is $\mathcal{F}_n$-measurable for all $n \geq 0$.

Why does this matter? It rules out "clairvoyant" strategies. You can't decide to stop at time $n$ based on what happens at time $n+1$. The decision must depend only on what you've seen so far.
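A short sketch illustrates the measurability requirement. The first hitting time of a level is a stopping time because deciding to stop at step $n$ only inspects the path up to $n$; by contrast, "the last visit to 0" requires knowing the future and is not one. The `max_steps` cutoff is a simulation convenience I've added, not part of the definition.

```python
import random

random.seed(1)

def first_hit(level, max_steps=10_000):
    """First time a symmetric ±1 walk from 0 reaches `level`.

    At each step n the stopping decision (s == level) uses only the path
    up to n, which is exactly what makes tau a stopping time.
    """
    s = 0
    for n in range(1, max_steps + 1):
        s += random.choice([-1, 1])
        if s == level:
            return n
    return None  # not hit within the simulation horizon

tau = first_hit(3)
# Note: "the LAST time the walk visits 0" is NOT a stopping time --
# verifying it requires peeking at the entire future of the path.
```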

Finite vs infinite stopping times

A stopping time $\tau$ is finite (or almost surely finite) if $\mathbb{P}(\tau < \infty) = 1$, meaning the process will eventually stop with probability 1. Examples include:

  • The first time a simple random walk hits level $+5$, starting from 0 (this is a.s. finite for a symmetric walk on $\mathbb{Z}$)
  • The time a gambler's fortune first reaches a predetermined profit target or ruin level

A stopping time is infinite if $\mathbb{P}(\tau = \infty) > 0$. For instance, the first time a random walk on $\mathbb{Z}^3$ returns to the origin has positive probability of never occurring (since the 3D symmetric random walk is transient). The distinction between finite and infinite stopping times is critical because most stopping theorems require $\tau < \infty$ a.s. as a minimum condition.
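A rough Monte Carlo comparison (an illustrative construction of my own, with a finite horizon standing in for "ever returns") shows the recurrence/transience contrast behind these examples:

```python
import random

random.seed(7)

def returns_to_origin(dim, horizon=1_000):
    """Does a simple symmetric walk on Z^dim return to 0 within `horizon` steps?"""
    pos = [0] * dim
    for _ in range(horizon):
        axis = random.randrange(dim)
        pos[axis] += random.choice([-1, 1])
        if all(c == 0 for c in pos):
            return True
    return False

trials = 2_000
frac_1d = sum(returns_to_origin(1) for _ in range(trials)) / trials
frac_3d = sum(returns_to_origin(3) for _ in range(trials)) / trials
# The 1D walk is recurrent (returns a.s.), so frac_1d is close to 1.
# The 3D walk is transient (Polya's return probability is about 0.34),
# so its first return time is an "infinite" stopping time.
```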

Martingale convergence theorem

Conditions for convergence

The martingale convergence theorem tells you when a martingale settles down to a well-defined limit as $n \to \infty$.

Doob's Forward Convergence Theorem: If $\{X_n\}$ is a martingale (or supermartingale) satisfying

$$\sup_{n \geq 0} \mathbb{E}[|X_n|] < \infty$$

then there exists a random variable $X_\infty$ with $\mathbb{E}[|X_\infty|] < \infty$ such that $X_n \to X_\infty$ almost surely.

A sufficient (stronger) condition is that the martingale is uniformly bounded: there exists a constant $M$ with $|X_n| \leq M$ for all $n$. Uniform boundedness implies the $L^1$-boundedness condition above.

One subtlety worth noting: almost sure convergence does not automatically give you $\mathbb{E}[X_\infty] = \mathbb{E}[X_0]$. For that, you need the stronger condition of uniform integrability, which guarantees convergence in $L^1$ as well.
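The Pólya urn gives a concrete bounded martingale to see the theorem in action: the fraction of red balls is a martingale taking values in $[0, 1]$, so Doob's theorem guarantees a.s. convergence (to a Beta-distributed limit). A sketch:

```python
import random

random.seed(3)

def polya_fraction_path(n_draws, red=1, blue=1):
    """Fraction of red balls in a Polya urn after each draw.

    Draw a ball, return it with one more of the same color. The red
    fraction is a martingale bounded in [0, 1], so it converges a.s.
    """
    fractions = []
    for _ in range(n_draws):
        if random.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
        fractions.append(red / (red + blue))
    return fractions

path = polya_fraction_path(5_000)
# Late-path oscillation shrinks: each draw moves the fraction by at most
# 1/(total balls), so the tail of the path hovers near its random limit.
tail_spread = max(path[-500:]) - min(path[-500:])
```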

Proof of convergence theorem

The proof rests on Doob's upcrossing inequality. Here's the core argument in outline:

  1. Fix an interval $[a, b]$ with $a < b$. Define $U_n([a,b])$ as the number of times the sequence $X_0, X_1, \ldots, X_n$ crosses upward from below $a$ to above $b$.

  2. Doob's upcrossing inequality bounds this: $\mathbb{E}[U_n([a,b])] \leq \frac{\mathbb{E}[(X_n - a)^-]}{b - a}$. For an $L^1$-bounded martingale, the right side stays bounded as $n \to \infty$.

  3. By the monotone convergence theorem, $\mathbb{E}[U_\infty([a,b])] < \infty$, so $U_\infty([a,b]) < \infty$ a.s.

  4. Since the number of upcrossings is finite a.s. for every rational pair $a < b$, the sequence $X_n$ cannot oscillate, and therefore it must converge a.s.
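The upcrossing count in step 1 is easy to compute for a finite path. A small helper (using the convention "fall to $\leq a$, then rise to $\geq b$ counts one upcrossing"):

```python
def upcrossings(path, a, b):
    """Count completed upcrossings of [a, b]: drops to <= a followed by
    a later rise to >= b. An oscillating sequence racks these up without
    bound -- exactly what the upcrossing inequality forbids for
    L^1-bounded martingales."""
    count, below = 0, False
    for x in path:
        if x <= a:
            below = True
        elif x >= b and below:
            count += 1
            below = False
    return count
```

For example, the sawtooth path `[0, 2, 0, 2, 0]` makes two upcrossings of $[0.5, 1.5]$, while a path that never reaches $b$ makes none.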

Optional stopping theorem

Statement of theorem

The optional stopping theorem (sometimes called Doob's optional stopping theorem) extends the martingale property from deterministic times to random stopping times.

Stopped process result: If $\{X_n\}$ is a martingale and $\tau$ is a stopping time, then the stopped process $\{X_{n \wedge \tau}\}$ (where $n \wedge \tau = \min(n, \tau)$) is also a martingale.

This is always true and requires no extra conditions on $\tau$. The deeper question is whether you can "pass to the limit" and conclude:

$$\mathbb{E}[X_\tau] = \mathbb{E}[X_0]$$

That equality requires additional conditions on $\tau$.
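The unconditional part is easy to check numerically. In this sketch (my example: a symmetric walk stopped on first exit from an interval), the stopped process keeps the constant expectation $\mathbb{E}[X_{n \wedge \tau}] = \mathbb{E}[X_0] = 0$ at every fixed $n$:

```python
import random

random.seed(11)

def stopped_value(n, low=-3, high=5):
    """X_{n ∧ τ} for a symmetric walk, with τ the first exit from (low, high)."""
    s = 0
    for _ in range(n):
        if s in (low, high):   # already stopped: the value is frozen at X_τ
            break
        s += random.choice([-1, 1])
    return s

trials = 20_000
means = {n: sum(stopped_value(n) for _ in range(trials)) / trials
         for n in (5, 20, 80)}
# E[X_{n ∧ τ}] = 0 for every n -- no conditions on τ are needed for the
# stopped process itself to be a martingale.
```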

Conditions for optional stopping

For $\mathbb{E}[X_\tau] = \mathbb{E}[X_0]$ to hold, you need at least one of the following:

  • Bounded stopping time: There exists $N < \infty$ such that $\tau \leq N$ a.s. This is the simplest and most commonly used condition.
  • A.s. finite stopping time + uniform integrability: $\tau < \infty$ a.s. and the family $\{X_{n \wedge \tau} : n \geq 0\}$ is uniformly integrable.
  • Dominated convergence condition: $\tau < \infty$ a.s. and there exists an integrable random variable $Y$ with $|X_{n \wedge \tau}| \leq Y$ for all $n$.

Without these conditions, the theorem can fail. The classic counterexample is the doubling strategy in a fair coin-toss game: a gambler who doubles their bet after each loss has a stopping time $\tau$ (first win) that is a.s. finite, yet $\mathbb{E}[X_\tau] = 1 \neq 0 = \mathbb{E}[X_0]$ because the stopped martingale is not uniformly integrable.
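The doubling strategy is easy to simulate. A sketch (the 64-round cap is a practical cutoff of mine; the chance of 64 straight losses is $2^{-64}$, so it essentially never binds):

```python
import random

random.seed(5)

def doubling_profit(max_rounds=64):
    """Fortune at τ = first win, betting 1, 2, 4, ... on fair coin flips.

    τ is a.s. finite, yet the profit at τ is always exactly 1:
    E[X_τ] = 1 ≠ 0 = E[X_0]. Optional stopping fails because losses
    before τ grow like 2^τ -- the stopped martingale is not UI.
    """
    bet, fortune = 1, 0
    for _ in range(max_rounds):
        if random.random() < 0.5:       # win: recover all losses plus 1
            return fortune + bet
        fortune -= bet
        bet *= 2
    return fortune  # 64 straight losses: probability 2^-64

profits = [doubling_profit() for _ in range(10_000)]
```

Every simulated run ends with profit exactly 1, even though the fortune process is a martingale started at 0.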


Proof of optional stopping theorem

For the bounded case ($\tau \leq N$ a.s.), the proof is clean:

  1. Since $\{X_{n \wedge \tau}\}$ is a martingale, you have $\mathbb{E}[X_{n \wedge \tau}] = \mathbb{E}[X_0]$ for every $n$ (just telescope the conditional expectations).
  2. Take $n = N$. Since $\tau \leq N$, you get $n \wedge \tau = \tau$, so $\mathbb{E}[X_\tau] = \mathbb{E}[X_0]$.

For the unbounded case with uniform integrability, the argument uses the fact that $X_{n \wedge \tau} \to X_\tau$ a.s. as $n \to \infty$, and uniform integrability upgrades this to $L^1$ convergence, giving $\mathbb{E}[X_\tau] = \lim_{n \to \infty} \mathbb{E}[X_{n \wedge \tau}] = \mathbb{E}[X_0]$.

Applications of martingale stopping

Gambling and fair games

In a fair game (modeled as a martingale), the optional stopping theorem tells you that no stopping strategy can give you a positive expected profit. If $X_n$ is your fortune at time $n$ in a fair game, then $\mathbb{E}[X_\tau] = \mathbb{E}[X_0]$ for any bounded stopping time $\tau$.

A concrete application: consider a symmetric random walk starting at $x$, with absorbing barriers at $0$ and $N$. Since the walk is a martingale, optional stopping gives $x = \mathbb{E}[X_\tau] = N \cdot \mathbb{P}(\text{hit } N) + 0 \cdot \mathbb{P}(\text{hit } 0)$, so the probability of reaching $N$ before ruin is $x/N$.
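A Monte Carlo check of the $x/N$ formula (note the stopped walk is trapped in $[0, N]$, so the dominated-convergence condition for optional stopping holds here):

```python
import random

random.seed(2)

def hits_top(x, n_barrier):
    """Does a symmetric walk from x reach n_barrier before 0?"""
    while 0 < x < n_barrier:
        x += random.choice([-1, 1])
    return x == n_barrier

x, N, trials = 3, 10, 20_000
estimate = sum(hits_top(x, N) for _ in range(trials)) / trials
# Optional stopping predicts P(hit N before 0) = x / N = 0.3.
```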

Wald's equation

Wald's equation connects the expected value of a random sum to the expected number of terms.

Let $\{X_n, n \geq 1\}$ be i.i.d. random variables with finite mean $\mu$, and let $\tau$ be a stopping time (with respect to the natural filtration of the $X_n$) satisfying $\mathbb{E}[\tau] < \infty$. Then:

$$\mathbb{E}\left[\sum_{n=1}^{\tau} X_n\right] = \mu \cdot \mathbb{E}[\tau]$$

The martingale route to proving this: define $S_n = \sum_{k=1}^n X_k$. Then $M_n = S_n - n\mu$ is a martingale. Applying the optional stopping theorem (after verifying the necessary conditions) gives $\mathbb{E}[M_\tau] = 0$, which rearranges to Wald's equation.
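A quick numerical sanity check of Wald's equation, with an example of my own choosing: i.i.d. steps uniform on $\{1, 2\}$ (so $\mu = 1.5$) and $\tau$ the first time the running sum reaches a target.

```python
import random

random.seed(9)

def run_once(target=10):
    """Sum i.i.d. steps uniform on {1, 2} until the sum reaches `target`.

    τ = number of steps taken; it is a stopping time with E[τ] < ∞.
    """
    s, n = 0, 0
    while s < target:
        s += random.choice([1, 2])
        n += 1
    return s, n

trials = 20_000
sums, taus = zip(*(run_once() for _ in range(trials)))
mean_sum = sum(sums) / trials
mean_tau = sum(taus) / trials
# Wald's equation: E[S_τ] = μ · E[τ] with μ = 1.5.
```

Note $\mathbb{E}[S_\tau]$ exceeds the target slightly (the overshoot), and Wald's equation accounts for that automatically through $\mathbb{E}[\tau]$.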

Sequential analysis and hypothesis testing

Martingale stopping theorems underpin sequential testing procedures, where data arrive one observation at a time and you want to reach a decision as quickly as possible while controlling error rates.

The Sequential Probability Ratio Test (SPRT), developed by Wald, is the prime example. The likelihood ratio process forms a martingale (under the null hypothesis), and the stopping boundaries are chosen so that the type I and type II error probabilities stay below specified thresholds. The optional stopping theorem is what guarantees the error control works, and Wald's equation helps compute the expected sample size.
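A compact SPRT sketch for Bernoulli data, under the standard Wald boundary approximations $A = (1-\beta)/\alpha$, $B = \beta/(1-\alpha)$ (the specific hypotheses $p_0 = 0.5$, $p_1 = 0.7$ and the `max_n` safeguard are illustrative choices of mine):

```python
import math
import random

random.seed(13)

def sprt(p0=0.5, p1=0.7, alpha=0.05, beta=0.05, true_p=0.5, max_n=10_000):
    """Wald's SPRT: accumulate the log-likelihood ratio until it leaves
    (log B, log A). Under H0 the likelihood ratio is a nonnegative
    martingale, which is what keeps the type I error near alpha."""
    log_a = math.log((1 - beta) / alpha)
    log_b = math.log(beta / (1 - alpha))
    llr = 0.0
    for n in range(1, max_n + 1):
        x = random.random() < true_p
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= log_a:
            return "reject H0", n
        if llr <= log_b:
            return "accept H0", n
    return "accept H0", max_n   # horizon safeguard for this sketch

runs = [sprt() for _ in range(1_000)]
type1_rate = sum(d == "reject H0" for d, _ in runs) / len(runs)
# Data drawn under H0, so the rejection rate should stay below ~alpha.
```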

Relationship to other concepts

Martingales vs supermartingales

A supermartingale replaces the equality in the martingale property with an inequality:

$$\mathbb{E}[X_{n+1} \mid \mathcal{F}_n] \leq X_n$$

This models an "unfavorable game" where the process tends to decrease over time. Similarly, a submartingale satisfies $\mathbb{E}[X_{n+1} \mid \mathcal{F}_n] \geq X_n$.

Both the convergence theorem and the optional stopping theorem have supermartingale analogues. For supermartingales, optional stopping gives the inequality $\mathbb{E}[X_\tau] \leq \mathbb{E}[X_0]$ (under appropriate conditions), which is useful for bounding probabilities and expected values.

Stopping times vs Markov times

In older literature, you'll sometimes see the term Markov time used as a synonym for stopping time. In some treatments, a Markov time carries the additional connotation that the process being stopped is Markov, so the stopping decision depends only on the current state rather than the full history.

For Markov chains, the first hitting time of a particular state or set of states is a natural example. The strong Markov property then guarantees that the process "restarts" at the hitting time, which is a powerful tool for analyzing recurrence and transience.

Martingale stopping vs optimal stopping

Optimal stopping problems ask: when should you stop to maximize (or minimize) an expected payoff? This is a decision problem, whereas martingale stopping theorems are structural results about stopped processes.

Classic examples of optimal stopping include:

  • The secretary problem: You interview candidates sequentially and must hire one, with no callbacks. The optimal strategy rejects the first $\approx N/e$ candidates, then hires the next one who's the best seen so far.
  • American option pricing: The holder of an American option chooses when to exercise. The option price equals the value of an optimal stopping problem, and the Snell envelope (the smallest supermartingale dominating the payoff process) characterizes the solution.
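The $1/e$ rule from the secretary problem is simple to simulate. A sketch (candidates are encoded as a random permutation of ranks; $n = 100$ is an illustrative choice):

```python
import math
import random

random.seed(17)

def secretary_trial(n=100):
    """One run of the 1/e rule: skip the first ~n/e candidates, then hire
    the first candidate better than everyone seen so far."""
    ranks = list(range(n))        # 0 = worst, n-1 = best
    random.shuffle(ranks)
    cutoff = round(n / math.e)
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1     # did we hire the overall best?
    return ranks[-1] == n - 1     # forced to take the last candidate

trials = 20_000
success = sum(secretary_trial() for _ in range(trials)) / trials
# The success probability tends to 1/e ≈ 0.368 as n grows.
```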

Martingale methods, especially the optional stopping theorem and supermartingale inequalities, are the primary tools for solving these problems.