Definition of stochastic processes
A stochastic process is a collection of random variables indexed by time (or sometimes space), used to model systems that evolve unpredictably. These processes give you a mathematical framework for analyzing random phenomena in fields like physics, biology, finance, and engineering.
Random variables over time
A stochastic process assigns a random variable $X_t$ to each point in time $t$. Each random variable represents the system's state at that moment, and the set of all possible states is called the state space.
- Stock prices over time have a continuous state space (the price can be any positive real number).
- The number of customers in a queue has a discrete state space (0, 1, 2, 3, ...).
Probabilistic models
Stochastic processes assign probabilities to different outcomes or trajectories of the system.
- Probability distributions describe the likelihood of the system being in a particular state at a given time.
- Joint probability distributions capture dependencies between random variables at different time points. For instance, tomorrow's stock price isn't independent of today's.
- Transition probabilities specify the likelihood of moving from one state to another.
Dynamical systems with randomness
Stochastic processes incorporate randomness into how a system evolves. That randomness can come from inherent uncertainty, external noise, or unpredictable events. The future state depends on both the current state and random factors.
For continuous-time systems, stochastic differential equations (SDEs) are the standard modeling tool. These generalize ordinary differential equations by adding a noise term, typically driven by Brownian motion.
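As a minimal sketch of how an SDE is simulated in practice, the Euler-Maruyama scheme discretizes time and replaces the Brownian driver with Gaussian increments. The example below uses an Ornstein-Uhlenbeck SDE, $dX = -\theta X\,dt + \sigma\,dB$; all parameter values are illustrative.

```python
import math
import random

def euler_maruyama_ou(theta=1.0, sigma=0.5, x0=2.0, dt=0.001, n_steps=5000, seed=42):
    """Simulate the OU SDE dX = -theta*X dt + sigma dB with Euler-Maruyama.

    Parameter values are hypothetical, chosen only for illustration.
    """
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))      # Brownian increment ~ N(0, dt)
        x = x + (-theta * x) * dt + sigma * dB  # drift step + noise step
        path.append(x)
    return path

path = euler_maruyama_ou()
```

The drift term pulls the path back toward zero while the noise term keeps it fluctuating, which is the qualitative behavior the SDE describes.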
Classification by state space
The state space is the set of all possible values the random variables can take. Its structure determines which mathematical tools you'll use.
Discrete state space
Here, the random variables take on a countable number of distinct values (often integers).
- Number of defective items on a production line
- Number of customers waiting in a queue
- Discrete-time Markov chains are the most common process type in this category
Continuous state space
The random variables can take any value within a continuous range (typically subsets of $\mathbb{R}$ or $\mathbb{R}^n$).
- Stock prices, temperature measurements, particle positions in a fluid
- Brownian motion and diffusion processes are classic examples
Finite vs infinite state space
- A finite state space has a fixed number of possible states. Markov chains on finite state spaces are easier to analyze, and every irreducible finite chain has a unique stationary distribution.
- An infinite state space has unboundedly many possible states. Random walks on $\mathbb{Z}$ and Poisson processes are examples. These generally require more advanced techniques, such as generating functions or transform methods.
Classification by time index
Stochastic processes are also classified by whether time is treated as discrete or continuous.
Discrete-time processes
The time index takes integer values: $t = 0, 1, 2, \ldots$. The system's state is observed at fixed intervals.
- Daily stock closing prices, monthly sales figures, annual population counts
- Common models: discrete-time Markov chains, autoregressive (AR) models
Continuous-time processes
The time index takes real values: $t \in [0, \infty)$. The system can change state at any instant.
- Particle motion, chemical reaction kinetics, high-frequency financial data
- Common models: Poisson processes, Brownian motion, solutions to SDEs
Classification by memory
How much of the process's history influences its future behavior is another key distinction.
Memoryless processes
The future state depends only on the present, not on how the system got there. The probability distribution of the next state is independent of the process's history.
- Poisson processes (memoryless inter-arrival times)
- Continuous-time Markov chains with exponential holding times
The exponential distribution is the only continuous distribution with the memoryless property: $P(X > s + t \mid X > s) = P(X > t)$.
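The memoryless property can be checked numerically: for exponential samples, the conditional survival probability $P(X > s + t \mid X > s)$ should match the unconditional $P(X > t) = e^{-t}$ (rate 1 here; the values of $s$ and $t$ are arbitrary).

```python
import math
import random

# Monte Carlo check of memorylessness for Exponential(rate=1):
# P(X > s + t | X > s) should equal P(X > t) = exp(-t).
rng = random.Random(0)
samples = [rng.expovariate(1.0) for _ in range(200_000)]

s, t = 0.5, 1.0
survived_s = [x for x in samples if x > s]
cond_prob = sum(x > s + t for x in survived_s) / len(survived_s)
print(round(cond_prob, 3), round(math.exp(-t), 3))
```

Both printed values land near $e^{-1} \approx 0.368$: having already waited time $s$ tells you nothing about the remaining wait.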
Processes with memory
The future state depends on both the current state and some or all past states.
- Autoregressive (AR) models use a fixed number of past values to predict the next.
- Moving average (MA) models depend on past random shocks.
- Hidden Markov models have an underlying Markov structure, but the observed process itself is not Markov.
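A small simulation makes the "memory" in an AR model concrete: in an AR(1) process $X_t = \phi X_{t-1} + \varepsilon_t$, the lag-1 autocorrelation of a long sample should recover $\phi$ (the parameter values below are illustrative).

```python
import random
import statistics

def simulate_ar1(phi=0.8, sigma=1.0, n=20_000, seed=1):
    """AR(1): X_t = phi * X_{t-1} + eps_t, with eps_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path

path = simulate_ar1()

# Estimate the lag-1 autocorrelation; it should be close to phi = 0.8.
mean = statistics.fmean(path)
num = sum((a - mean) * (b - mean) for a, b in zip(path, path[1:]))
den = sum((a - mean) ** 2 for a in path)
print(round(num / den, 2))
```

The strong correlation between consecutive values is exactly what distinguishes this process from a memoryless one.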
Markov vs non-Markov
Markov processes satisfy the Markov property: the future depends on the past only through the present state.
This dramatically simplifies analysis because you only need to track the current state, not the full history.
Non-Markov processes have more complex dependence structures. Examples include long-memory processes and fractional Brownian motion, where correlations decay slowly and the entire history matters.
Examples of stochastic processes
Several fundamental stochastic processes serve as building blocks for more complex models.

Random walks
A random walk models an object taking random steps in some space. In the simplest version on $\mathbb{Z}$, at each time step the position increases or decreases by 1 with equal probability: $X_{t+1} = X_t + \epsilon_t$, where $\epsilon_t = \pm 1$ each with probability $1/2$.
Random walks appear in physics (as discrete approximations to Brownian motion), finance (simple models of price changes), and biology (animal foraging). Variations include biased random walks (unequal step probabilities), correlated random walks, and random walks with absorbing or reflecting barriers.
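A quick simulation illustrates two standard facts about the simple symmetric walk: after $n$ steps the mean position is $0$ and the variance is $n$ (the step count and number of runs below are arbitrary).

```python
import random
import statistics

def random_walk(n_steps, seed):
    """Simple symmetric random walk on the integers: steps of +/-1 with prob 1/2."""
    rng = random.Random(seed)
    pos, path = 0, [0]
    for _ in range(n_steps):
        pos += rng.choice((-1, 1))
        path.append(pos)
    return path

# After n = 100 steps, E[X_n] = 0 and Var(X_n) = n = 100.
finals = [random_walk(100, seed)[-1] for seed in range(2000)]
print(round(statistics.fmean(finals), 2), round(statistics.pvariance(finals), 1))
```

The spread growing like $\sqrt{n}$ is the discrete counterpart of Brownian motion's $\sqrt{t}$ scaling.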
Poisson processes
A Poisson process counts the number of events occurring over time, where events happen independently at a constant average rate $\lambda$.
- The number of events in any interval of length $t$ follows a Poisson distribution: $P(N(t) = k) = \frac{(\lambda t)^k e^{-\lambda t}}{k!}$
- Events in disjoint time intervals are independent.
- Inter-arrival times are exponentially distributed with mean $1/\lambda$.
Applications: customer arrivals at a service counter, radioactive decay, website traffic.
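The exponential inter-arrival characterization gives a simple way to simulate a Poisson process: keep adding exponential waiting times until the horizon is exceeded. The mean count over $[0, T]$ should then be $\lambda T$ (rate and horizon below are illustrative).

```python
import random

def poisson_process_count(rate, horizon, rng):
    """Count events in [0, horizon] by summing exponential inter-arrival times."""
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate)   # next inter-arrival time ~ Exp(rate)
        if t > horizon:
            return count
        count += 1

rng = random.Random(7)
counts = [poisson_process_count(rate=2.0, horizon=10.0, rng=rng) for _ in range(5000)]
mean_count = sum(counts) / len(counts)
# Mean of N(10) should be rate * horizon = 20.
print(round(mean_count, 1))
```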
Brownian motion
Brownian motion (the Wiener process) is a continuous-time, continuous-state process with three defining properties:
- $B_0 = 0$
- Increments are independent: $B_t - B_s$ is independent of $\{B_u : u \le s\}$ for $s < t$
- Increments are normally distributed: $B_t - B_s \sim N(0, t - s)$
Its sample paths are continuous but extremely jagged (nowhere differentiable, almost surely). Brownian motion is central to stochastic calculus and financial modeling. Geometric Brownian motion, where $dS_t = \mu S_t\,dt + \sigma S_t\,dB_t$, is the process underlying the Black-Scholes option pricing model.
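The defining properties translate directly into a simulation: a Brownian path is just a cumulative sum of independent $N(0, \Delta t)$ increments, and $\mathrm{Var}(B_1) = 1$ can be verified across many paths (grid size and path count below are arbitrary).

```python
import math
import random

def brownian_path(T=1.0, n=1000, seed=3):
    """Approximate a Brownian path on [0, T] by cumulative Gaussian increments."""
    rng = random.Random(seed)
    dt = T / n
    B = [0.0]                                   # B_0 = 0
    for _ in range(n):
        B.append(B[-1] + rng.gauss(0.0, math.sqrt(dt)))  # increment ~ N(0, dt)
    return B

# Check Var(B_1) ~ 1 using the endpoints of many independent paths.
finals = [brownian_path(seed=s)[-1] for s in range(3000)]
var = sum(x * x for x in finals) / len(finals)
print(round(var, 2))
```

A geometric Brownian motion path can be obtained from the same $B$ via $S_t = S_0 \exp((\mu - \sigma^2/2)t + \sigma B_t)$.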
Markov chains
Markov chains are discrete-time processes satisfying the Markov property. The state space can be discrete or continuous (in the continuous-state case, transitions are described by a Markov kernel rather than a transition matrix).
- Transition probabilities govern movement between states and can be organized into a transition matrix.
- Applications: weather modeling, PageRank algorithm, MCMC methods in Bayesian statistics, queueing systems.
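The transition-matrix machinery fits in a few lines. The sketch below uses a hypothetical two-state "weather" chain and finds its stationary distribution by power iteration, i.e., repeatedly applying the transition matrix to an initial distribution.

```python
# Hypothetical two-state weather chain: state 0 = sunny, state 1 = rainy.
# Row i gives the transition probabilities out of state i.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step_distribution(dist, P):
    """One step of the chain: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Power iteration: repeated application of P converges to the stationary
# distribution pi satisfying pi = pi P (here pi = (5/6, 1/6) exactly).
dist = [1.0, 0.0]
for _ in range(100):
    dist = step_distribution(dist, P)
print([round(p, 4) for p in dist])   # prints [0.8333, 0.1667]
```

This is the same computation PageRank performs, just on a much larger transition matrix.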
Stationarity of stochastic processes
Stationarity describes whether the statistical properties of a process stay constant over time. This matters because many analytical tools and theorems only apply to stationary processes.
Strict vs wide-sense stationarity
Strict (strong) stationarity means the entire joint distribution is invariant under time shifts: $(X_{t_1}, \ldots, X_{t_n}) \overset{d}{=} (X_{t_1 + \tau}, \ldots, X_{t_n + \tau})$ for any time points $t_1, \ldots, t_n$ and any shift $\tau$.
Wide-sense (weak) stationarity is a less demanding condition requiring only:
- Constant mean: $E[X_t] = \mu$ for all $t$
- Covariance depends only on the lag: $\mathrm{Cov}(X_t, X_{t+h}) = \gamma(h)$
Strict stationarity implies wide-sense stationarity (provided second moments exist), but the converse is not generally true. A Gaussian process is a notable exception: for Gaussian processes, wide-sense stationarity does imply strict stationarity, because the distribution is fully determined by the mean and covariance.
Stationary increments
A process has stationary increments if the distribution of $X_{t+h} - X_t$ depends only on the lag $h$, not on the starting time $t$. Both Brownian motion and Poisson processes have stationary increments.
A process with stationary increments is not necessarily stationary itself. For example, Brownian motion has stationary increments, but $\mathrm{Var}(B_t) = t$ grows with time, so it's not stationary.
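A quick numeric check of this distinction: since $B_t \sim N(0, t)$ exactly, sampling the marginal at $t = 1$ and $t = 4$ shows the variance growing with $t$, even though every increment of a given lag has the same distribution (sample sizes are arbitrary).

```python
import random

# Brownian motion has stationary increments, but Var(B_t) = t grows,
# so the process itself is not stationary.
rng = random.Random(11)

def bm_at(t, rng):
    """Sample the exact marginal B_t ~ N(0, t)."""
    return rng.gauss(0.0, t ** 0.5)

n = 20_000
var1 = sum(bm_at(1.0, rng) ** 2 for _ in range(n)) / n
var4 = sum(bm_at(4.0, rng) ** 2 for _ in range(n)) / n
print(round(var1, 2), round(var4, 2))
```

The second estimate comes out roughly four times the first, reflecting $\mathrm{Var}(B_t) = t$.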
Ergodicity
Ergodicity is a stronger property than stationarity. An ergodic process is one where the time average of a single, sufficiently long realization converges to the ensemble average (the expected value across all possible realizations).
This is practically important: it means you can estimate statistical properties like the mean and variance from just one long observation of the process, rather than needing many independent realizations. Many stationary Markov chains (specifically, irreducible and aperiodic ones) are ergodic.
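Ergodicity is easy to see in a simulation: one long run of an irreducible, aperiodic two-state chain spends a fraction of time in each state that converges to the stationary probabilities. The transition probabilities below are hypothetical; for this chain, solving $\pi = \pi P$ gives $\pi_0 = 5/6$ exactly.

```python
import random

# One long run of a two-state chain; transition probabilities are illustrative.
# From state 0: stay with prob 0.9. From state 1: move to 0 with prob 0.5.
P = {0: (0.9, 0.1),
     1: (0.5, 0.5)}

rng = random.Random(5)
state, time_in_0, n = 0, 0, 200_000
for _ in range(n):
    time_in_0 += (state == 0)
    state = 0 if rng.random() < P[state][0] else 1

pi_0 = 5 / 6   # exact stationary probability of state 0 for this chain
print(round(time_in_0 / n, 3), round(pi_0, 3))
```

The time average from a single trajectory matches the ensemble quantity $\pi_0$, which is exactly what ergodicity promises.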
Sample paths of stochastic processes
A sample path (also called a realization or trajectory) is a single instance of the process over time. Think of it as one possible "story" the random system could tell.
Realizations and trajectories
Each realization represents one possible outcome. Formally, a sample path is a function of time for a fixed outcome $\omega$: $t \mapsto X_t(\omega)$. Different realizations can look very different from each other, depending on the underlying probability distribution.
Continuity of sample paths
- Continuous sample paths have no jumps. Examples: Brownian motion, Ornstein-Uhlenbeck process.
- Discontinuous sample paths exhibit jumps. Examples: Poisson processes (which jump by 1 at each event), compound Poisson processes (which can jump by random amounts).
Whether paths are continuous or not affects which mathematical tools apply. For instance, Itô calculus is built for processes with continuous paths, while jump-diffusion models require extensions.
Differentiability of sample paths
Continuity does not guarantee differentiability. Brownian motion is the classic example: its paths are continuous everywhere but differentiable nowhere (almost surely). This is why you can't write $dB_t/dt$ in the ordinary sense, and it's a key reason stochastic calculus (Itô's lemma) is needed instead of standard calculus.
The Ornstein-Uhlenbeck process shares this roughness, since it is driven by the same Brownian noise; smoother processes, such as its time integral, are differentiable in the mean-square sense. Differentiability properties determine which calculus rules you can apply.
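The non-differentiability shows up numerically in the difference quotient: $(B_{t+h} - B_t)/h$ has standard deviation $\sqrt{h}/h = 1/\sqrt{h}$, which blows up as $h \to 0$ instead of converging to a derivative (sample sizes below are arbitrary).

```python
import math
import random

# The difference quotient (B(t+h) - B(t)) / h has standard deviation
# 1 / sqrt(h): it diverges as h -> 0, so the limit defining a derivative
# cannot exist.
rng = random.Random(2)

def quotient_std(h, n=50_000):
    """Empirical std of the Brownian difference quotient for lag h."""
    quotients = [rng.gauss(0.0, math.sqrt(h)) / h for _ in range(n)]
    m = sum(quotients) / n
    return (sum((q - m) ** 2 for q in quotients) / n) ** 0.5

for h in (0.1, 0.01, 0.001):
    print(h, round(quotient_std(h), 1))
```

Each tenfold shrink of $h$ multiplies the spread of the quotient by roughly $\sqrt{10}$, the opposite of the convergence a derivative would require.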
Filtrations and adapted processes
These concepts formalize the idea of "what information is available at each point in time." They're essential for martingale theory and stochastic calculus.
Information accumulation over time
A filtration $\{\mathcal{F}_t\}$ is an increasing family of $\sigma$-algebras. Each $\mathcal{F}_t$ represents all the information available up to time $t$. "Increasing" means information is never lost: $\mathcal{F}_s \subseteq \mathcal{F}_t$ whenever $s \le t$.
The most common example is the natural filtration, generated by the process itself: $\mathcal{F}_t = \sigma(X_s : s \le t)$. This contains exactly the information you'd have from observing the process up to time $t$.
Adapted vs predictable processes
- A process $X$ is adapted to a filtration if $X_t$ is $\mathcal{F}_t$-measurable for every $t$. In plain terms, you can determine the value of $X_t$ using only information available at time $t$. Most processes you encounter (Brownian motion, Poisson processes, Itô processes) are adapted to their natural filtration.
- A process is predictable if $X_t$ is measurable with respect to $\mathcal{F}_{t^-}$ (in discrete time, $\mathcal{F}_{t-1}$), the information available strictly before time $t$. Predictability is a stronger condition and becomes important when defining stochastic integrals.
Martingales
A martingale is an adapted process with $E[|X_t|] < \infty$ satisfying $E[X_t \mid \mathcal{F}_s] = X_s$ for all $s \le t$.
The interpretation: the best forecast of a martingale's future value, given everything you know now, is its current value. There's no systematic drift up or down.
- Brownian motion is a martingale.
- A compensated Poisson process $N_t - \lambda t$ is a martingale.
- In finance, discounted asset prices are martingales under the risk-neutral measure (this is the foundation of no-arbitrage pricing).
Martingales also come in two related flavors: a submartingale has $E[X_t \mid \mathcal{F}_s] \ge X_s$ (tendency to increase), and a supermartingale has $E[X_t \mid \mathcal{F}_s] \le X_s$ (tendency to decrease).
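A Monte Carlo sanity check of one martingale in the list above: the compensated Poisson process $M_t = N_t - \lambda t$ must have mean zero at every $t$ (a necessary consequence of the martingale property, since $M_0 = 0$). Rate, horizon, and path count below are illustrative.

```python
import random

# Check that M(T) = N(T) - lambda*T averages to zero across many paths.
rng = random.Random(9)
lam, T, n_paths = 3.0, 2.0, 20_000

def poisson_count(rate, horizon):
    """Number of Poisson events in [0, horizon] via exponential inter-arrivals."""
    t, k = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return k
        k += 1

m_values = [poisson_count(lam, T) - lam * T for _ in range(n_paths)]
mean_m = sum(m_values) / n_paths
print(round(mean_m, 2))
```

The raw count $N_t$ drifts upward (it is a submartingale); subtracting the compensator $\lambda t$ removes exactly that drift.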