Absorption and ergodicity are key concepts in Markov chains that shape their long-term behavior. Absorption occurs when a chain reaches a state it can't escape, while ergodicity ensures convergence to a unique stationary distribution regardless of the starting point.

These properties have wide-ranging applications. Absorption models processes that eventually end, like equipment failure or disease progression. Ergodicity is crucial for systems that reach equilibrium, such as chemical reactions or stable populations, and underpins algorithms like PageRank.

Definition of absorption

  • Absorption is a key concept in Markov chains where the process eventually reaches a state or set of states from which it cannot escape
  • Once a Markov chain enters an absorbing state, it remains there forever and the process effectively ends
  • The transition probabilities and structure of the Markov chain determine whether absorption occurs and how long it takes on average to reach an absorbing state

Absorbing states

  • An absorbing state is a state in a Markov chain that, once entered, cannot be left
  • Mathematically, an absorbing state $i$ has a transition probability of $p_{ii} = 1$ and $p_{ij} = 0$ for all $j \neq i$ (see the sketch after this list)
  • Examples of absorbing states include:
    • In a game of Chutes and Ladders, the final square on the board is an absorbing state
    • In a model of equipment failure, a completely broken state from which no repair is possible is absorbing
  • A Markov chain is called absorbing if it has at least one absorbing state and it is possible to reach an absorbing state from any non-absorbing state
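
To make the definition concrete, here is a minimal sketch (assuming NumPy; the three-state matrix is an invented example, not one from the text) that flags the absorbing states of a transition matrix by checking the diagonal:

```python
import numpy as np

# Hypothetical 3-state chain: rows are current states, columns next states.
# State 2 is absorbing: p_22 = 1, so p_2j = 0 for every j != 2.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
    [0.0, 0.0, 1.0],
])

# A state i is absorbing exactly when p_ii = 1.
absorbing = [i for i in range(len(P)) if np.isclose(P[i, i], 1.0)]
print(absorbing)  # [2]
```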

Transition probabilities

  • The transition probabilities of a Markov chain specify the likelihood of moving from one state to another in a single step
  • In an absorbing Markov chain, the transition probabilities can be arranged into a canonical form that separates absorbing and transient states
  • The canonical form of the transition matrix is:
$$P = \begin{bmatrix} I & 0 \\ R & Q \end{bmatrix}$$

where:
  • $I$ is an identity matrix representing the absorbing states, each of which returns to itself with probability 1
  • $0$ is a zero matrix for transitions from absorbing to transient states (which are impossible)
  • $R$ contains the probabilities of moving from each transient state to each absorbing state
  • $Q$ is the matrix of transition probabilities between transient states

Canonical form

  • The canonical form of the transition matrix clearly distinguishes the absorbing and transient states
  • Arranging the transition probabilities in canonical form allows for easier analysis of absorption probabilities and the expected time until absorption
  • The structure of the canonical form also provides insights into the communication classes and irreducibility of the Markov chain
    • For example, if there are multiple absorbing states and $R$ has more than one non-zero column, transient states can be absorbed into more than one absorbing class

Absorption probabilities

  • Absorption probabilities quantify the likelihood of a Markov chain eventually being absorbed into each absorbing state, given a particular starting state
  • These probabilities can be computed using the fundamental matrix and the structure of the canonical form transition matrix
  • Understanding absorption probabilities helps analyze the long-term behavior of absorbing Markov chains and make predictions about the process

Fundamental matrix

  • The fundamental matrix $N$ of an absorbing Markov chain is defined as:
$$N = (I - Q)^{-1}$$
where $I$ is an identity matrix and $Q$ is the matrix of transition probabilities between transient states from the canonical form
  • Each entry $n_{ij}$ of the fundamental matrix represents the expected number of times the chain visits transient state $j$, given that it starts in transient state $i$, before being absorbed
  • The fundamental matrix captures important information about the behavior of the chain prior to absorption and is used in computing absorption probabilities and the expected time until absorption

Computations using fundamental matrix

  • Let $b_{ik}$ be the probability of being absorbed into absorbing state $k$, given that the chain starts in transient state $i$
  • The matrix of absorption probabilities can be computed as:
$$B = NR$$
where $R$ is the transient-to-absorbing block of the canonical form
  • Row $i$ of $B$ gives the absorption probabilities for each absorbing state when the chain starts in transient state $i$
  • The fundamental matrix can also be used to calculate the expected number of steps until absorption for each starting state

Expected number of steps until absorption

  • Let $t_i$ be the expected number of steps until absorption, given that the chain starts in transient state $i$
  • The vector of expected absorption times $\vec{t}$ can be computed as:
$$\vec{t} = N\vec{1}$$
where $\vec{1}$ is a column vector of all ones
  • Each entry of $\vec{t}$ represents the expected number of steps before the chain is absorbed, starting from the corresponding transient state
  • These expected absorption times provide valuable information about the duration of the process and how long it takes to reach an absorbing state on average
    • For example, in a model of a game or a machine repair process, the expected absorption times can help predict how many rounds are played or how long a machine operates before reaching a terminal state
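
The following is a minimal sketch of these computations (assuming NumPy; the chain is a small fair-coin gambler's ruin on states 0–4, chosen purely for illustration, with 0 and 4 absorbing and 1–3 transient):

```python
import numpy as np

# Gambler's ruin on {0, 1, 2, 3, 4}: 0 and 4 absorb; from 1-3 the chain
# moves up or down by 1 with probability 1/2 each.
# Canonical ordering: absorbing states (0, 4) first, then transient (1, 2, 3).
Q = np.array([              # transient -> transient transitions
    [0.0, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.0],
])
R = np.array([              # transient -> absorbing (columns: state 0, state 4)
    [0.5, 0.0],
    [0.0, 0.0],
    [0.0, 0.5],
])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix N = (I - Q)^{-1}
B = N @ R                          # absorption probabilities B = NR
t = N @ np.ones(3)                 # expected steps until absorption t = N1

print(B[1])  # from state 2: [0.5, 0.5], ruin and target equally likely
print(t)     # [3., 4., 3.] expected steps from states 1, 2, 3
```

Starting from the middle state, the two absorbing states are reached with equal probability, matching the symmetry of the fair game.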

Ergodicity

  • Ergodicity is another important property of Markov chains that describes their long-term behavior and convergence to a stable distribution
  • A Markov chain is ergodic if it is both irreducible (it is possible to reach any state from any other state) and aperiodic (the chain does not get trapped in cycles)
  • Ergodic Markov chains have a unique stationary distribution that they converge to over time, regardless of the starting state

Definition of ergodicity

  • A Markov chain is ergodic if it satisfies two conditions:
    1. Irreducibility: for any two states $i$ and $j$, there is a positive probability of eventually reaching state $j$ from state $i$ (and vice versa)
    2. Aperiodicity: the chain does not have a deterministic cycle that it gets trapped in
  • Ergodicity ensures that the long-term behavior of the chain is independent of the starting state and converges to a unique stationary distribution

Ergodic Markov chains

  • In an ergodic Markov chain, the transition probabilities and structure of the chain allow for reaching any state from any other state over time
  • The chain mixes well and does not get stuck in subsets of states or deterministic cycles
  • Examples of ergodic Markov chains include:
    • A well-shuffled deck of cards, where any arrangement of the deck can be reached from any other after sufficient shuffling
    • A random walk on a connected, non-bipartite graph, where the walker can reach any node from any other node
  • Ergodic chains have important applications in modeling systems that exhibit long-term stability and predictability

Stationary distributions

  • A stationary distribution $\vec{\pi}$ of a Markov chain is a probability distribution over the states that remains unchanged under the transition matrix $P$
  • Mathematically, a distribution $\vec{\pi}$ is stationary if it satisfies:
$$\vec{\pi} = \vec{\pi}P$$
  • In other words, if the chain starts in the stationary distribution, it will remain in that distribution after each transition
  • For an ergodic Markov chain, the stationary distribution represents the long-term proportion of time the chain spends in each state, regardless of the starting state

Uniqueness of stationary distribution

  • One of the key properties of ergodic Markov chains is the uniqueness of the stationary distribution
  • In an ergodic chain, there is only one distribution $\vec{\pi}$ that satisfies the stationary condition $\vec{\pi} = \vec{\pi}P$
  • This unique stationary distribution is also the limiting distribution of the chain: as the number of steps tends to infinity, the distribution of the chain converges to $\vec{\pi}$, regardless of the initial state
  • The uniqueness of the stationary distribution in ergodic chains is crucial for making long-term predictions and understanding the stability of the modeled system
    • For example, in a model of a well-mixed chemical reaction, the unique stationary distribution would represent the equilibrium concentrations of the reactants and products
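
As a minimal numerical sketch (NumPy again; the two-state chain is an invented example), the stationary distribution can be found as the left eigenvector of $P$ for eigenvalue 1, normalized to sum to 1:

```python
import numpy as np

# Hypothetical ergodic 2-state chain.
P = np.array([
    [0.9, 0.1],
    [0.5, 0.5],
])

# pi = pi P means pi is a left eigenvector of P with eigenvalue 1,
# i.e. a right eigenvector of P transpose.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
pi /= pi.sum()               # normalize into a probability distribution

print(pi)                    # [0.8333..., 0.1666...]
print(pi @ P)                # the same vector: it is unchanged by P
```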

Ergodicity vs absorption

  • Ergodicity and absorption are two distinct long-term behaviors that a Markov chain can exhibit
  • While ergodic chains converge to a unique stationary distribution, absorbing chains eventually get trapped in absorbing states
  • Understanding the differences between ergodicity and absorption is crucial for modeling and analyzing various systems and processes

Differences in long-term behavior

  • Ergodic Markov chains:
    • Converge to a unique stationary distribution $\vec{\pi}$ over time
    • The long-term behavior is independent of the starting state
    • The chain continues to evolve and transition between states according to the stationary distribution
  • Absorbing Markov chains:
    • Eventually reach an absorbing state and remain there forever
    • The long-term behavior depends on the starting state and the absorption probabilities
    • Once an absorbing state is reached, the chain effectively stops evolving

Irreducibility and aperiodicity

  • Irreducibility and aperiodicity are the two key properties that distinguish ergodic chains from absorbing chains
  • An ergodic chain must be both irreducible and aperiodic, ensuring that it can reach any state from any other state and does not get trapped in cycles
  • An absorbing chain, by definition, is not irreducible, as it has absorbing states that cannot be escaped once entered
  • The presence of absorbing states violates the irreducibility condition and prevents the chain from being ergodic

Examples of ergodic and absorbing chains

  • Examples of ergodic Markov chains:
    • A random walk on a connected, non-bipartite graph
    • The long-term behavior of a well-shuffled deck of cards
    • The equilibrium distribution of molecules in a well-mixed gas
  • Examples of absorbing Markov chains:
    • A gambler's ruin problem, where the gambler either reaches a target wealth or goes broke (absorbing states)
    • A model of equipment failure, where the equipment either works or reaches a failed state that cannot be repaired
    • The stages of a disease progression, where the patient either recovers or reaches a terminal state

Applications of absorption and ergodicity

  • The concepts of absorption and ergodicity in Markov chains have numerous applications across various fields, including queueing theory, population dynamics, and computer science
  • Understanding the long-term behavior of systems modeled as Markov chains allows for making predictions, optimizing processes, and designing effective algorithms

Queueing theory

  • Queueing theory is the study of waiting lines and the analysis of congestion and delays in systems
  • Markov chains are often used to model queueing systems, where the states represent the number of customers or jobs in the system
  • Absorbing Markov chains can model situations where a queue eventually empties or reaches a maximum capacity (absorbing states)
  • Ergodic Markov chains can model stable queueing systems where the arrival and service rates balance out, leading to a steady-state distribution of queue lengths (a small sketch follows this list)
  • Examples of queueing systems modeled using Markov chains include:
    • Call centers with a limited number of operators
    • Manufacturing systems with finite buffer capacities
    • Computer networks with packet queues at routers
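
As a concrete sketch of the stable case (NumPy; the arrival and service probabilities and the capacity are invented for illustration), a finite-capacity queue can be written as a small birth-death Markov chain whose stationary distribution gives the long-run queue-length probabilities:

```python
import numpy as np

# Discrete-time queue with capacity 3. Each step: one arrival with prob. a
# (blocked when full), one departure with prob. s (only if non-empty).
a, s, cap = 0.3, 0.5, 3
P = np.zeros((cap + 1, cap + 1))    # states = number of customers, 0..cap
for n in range(cap + 1):
    p_arr = a if n < cap else 0.0
    p_dep = s if n > 0 else 0.0
    up = p_arr * (1 - p_dep)        # arrival, no departure -> n + 1
    down = p_dep * (1 - p_arr)      # departure, no arrival -> n - 1
    if n < cap:
        P[n, n + 1] = up
    if n > 0:
        P[n, n - 1] = down
    P[n, n] = 1.0 - up - down       # otherwise the count is unchanged

# The chain is irreducible and aperiodic, so its unique stationary
# distribution gives the steady-state queue-length probabilities.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
print(pi / pi.sum())
```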

Population dynamics

  • Markov chains can be used to model the dynamics of populations over time, such as the growth or decline of species in an ecosystem
  • Absorbing Markov chains can represent situations where a population eventually becomes extinct (an absorbing state) or reaches a carrying capacity
  • Ergodic Markov chains can model stable populations with balanced birth and death rates, leading to an equilibrium distribution
  • Examples of population dynamics modeled using Markov chains include:
    • The spread of an infectious disease in a population
    • The genetic drift of alleles in a finite population
    • The dynamics of competing species in an ecosystem

PageRank algorithm

  • The PageRank algorithm, used by search engines like Google, relies on the concepts of ergodicity and stationary distributions in Markov chains
  • Web pages are modeled as states in a Markov chain, with hyperlinks representing transitions between states
  • The ergodicity of the chain ensures the existence of a unique stationary distribution, which represents the long-term proportion of time a random web surfer spends on each page
  • The PageRank of a web page is determined by its stationary probability, reflecting its importance and centrality in the network of web pages
  • The PageRank algorithm has been successful in ranking web pages and providing relevant search results by leveraging the properties of ergodic Markov chains
  • Variations of the PageRank algorithm have also been applied to other domains, such as ranking scientific papers, social network users, and product recommendations
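
A compact sketch of this idea (not Google's production implementation; the four-page link graph and the damping factor 0.85 are illustrative assumptions) computes PageRank by power iteration on the random-surfer chain:

```python
import numpy as np

# Hypothetical link graph: page i links to the pages in links[i].
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85                      # number of pages, damping factor

# Column-stochastic link matrix: M[j, i] = prob. of following a link i -> j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

# With prob. d the surfer follows a link, with prob. 1 - d jumps to a
# uniformly random page; the jump keeps the chain irreducible and
# aperiodic, so a unique stationary distribution exists.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = d * (M @ rank) + (1 - d) / n

print(rank)   # stationary probabilities = PageRank scores
```

Heavily linked pages (here page 2, and page 0 which it links back to) end up with the largest stationary probabilities, which is exactly the ranking signal PageRank extracts.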

Key Terms to Review (22)

Absorbing states: Absorbing states are specific types of states in a Markov chain where, once the process enters that state, it cannot leave. This means that the probability of transitioning from an absorbing state to any other state is zero. Absorbing states are crucial in understanding the long-term behavior of Markov chains, as they represent final outcomes or terminal points within the stochastic process.
Absorption probabilities: Absorption probabilities measure the likelihood that a stochastic process will eventually reach an absorbing state, where once entered, the process cannot leave. These probabilities are crucial in understanding how systems behave in the long run, particularly in Markov chains where certain states can trap the process. The concept ties directly into broader ideas of ergodicity, which explores conditions under which long-term averages converge to expected values.
Almost Sure Convergence: Almost sure convergence is a type of convergence in probability theory where a sequence of random variables converges to a random variable with probability one. This concept is essential in analyzing stochastic processes, as it provides a strong form of convergence that ensures the limiting behavior of the sequence aligns with the underlying probability structure.
Aperiodicity: Aperiodicity refers to the property of a stochastic process where the system does not exhibit regular cycles or patterns in its state transitions. This means that the return times to a particular state can vary, allowing for greater flexibility in the long-term behavior of the process. Aperiodicity is crucial when analyzing stationary distributions, absorption probabilities, and ergodic behavior, as it ensures that the system can eventually explore all states over time without being trapped in a cycle.
Borel-Cantelli Lemma: The Borel-Cantelli Lemma is a fundamental result in probability theory that gives conditions under which a sequence of events occurs infinitely often. Specifically, it states that if the sum of the probabilities of a sequence of events is finite, then the probability that infinitely many of them occur is zero. This concept is crucial for understanding the behavior of random variables and their convergence properties.
Canonical form: Canonical form refers to a standard or simplified representation of a mathematical object that makes it easier to analyze its properties. In the context of stochastic processes, especially regarding absorption and ergodicity, canonical forms help in understanding the long-term behavior of Markov chains by simplifying the state space and identifying absorbing states.
Ergodic Theorem: The Ergodic Theorem states that, under certain conditions, the time averages of a dynamical system will converge to the ensemble averages when the system is observed over a long period. This concept is crucial as it connects statistical mechanics with the long-term behavior of a system, emphasizing that individual trajectories will eventually exhibit the same statistical properties as the entire ensemble, particularly in processes that are stationary and ergodic.
Ergodicity: Ergodicity refers to a property of a stochastic process where time averages converge to ensemble averages for almost all initial states. This means that, over a long period, the behavior of a system can be fully characterized by observing a single trajectory. This concept is significant because it helps in understanding the long-term behavior and statistical properties of different stochastic processes.
Expected number of steps until absorption: The expected number of steps until absorption refers to the average number of transitions or moves a stochastic process will take before reaching an absorbing state. This concept is crucial for understanding how long a process will take to settle into a stable state, highlighting the dynamics of states in a Markov chain where some states are absorbing and others are transient. The expected value provides insights into the behavior of the system and the likelihood of remaining in various states over time.
First Step Analysis: First step analysis is a method used in stochastic processes to evaluate the expected outcomes of a system by considering the initial state and the possible transitions to other states. This approach allows for the simplification of complex problems by breaking them down into manageable parts, focusing on the immediate decisions and their consequences. It plays a significant role in understanding absorption and ergodicity by providing insights into how a process behaves as it moves through its states over time.
Fundamental matrix: The fundamental matrix is a matrix that describes the expected number of times a Markov chain will be in a transient state before being absorbed in an absorbing state. It plays a critical role in analyzing absorption processes by providing insights into the behavior of transient states and their eventual transition to absorbing states.
Irreducibility: Irreducibility in the context of stochastic processes refers to a property of a Markov chain where it is possible to reach any state from any other state in a finite number of steps. This characteristic indicates that all states communicate with each other, forming a single communicating class. It is an important feature because it implies that long-term behavior and stationary distributions are defined across the entire state space, impacting concepts like absorption and ergodicity.
L1 convergence: L1 convergence refers to the convergence of a sequence of random variables in the sense of expected absolute differences: the expected value of the absolute difference between the variables and their limit converges to zero. This concept is essential in understanding the behavior of sequences of random variables and is often used in discussions related to ergodicity, martingales, and their stopping theorems.
Markov Chains: Markov chains are mathematical systems that undergo transitions from one state to another within a finite or countable number of possible states, where the probability of moving to the next state depends only on the current state and not on the sequence of events that preceded it. This memoryless property is a fundamental characteristic, making them useful for modeling a variety of stochastic processes, particularly in analyzing long-term behavior, stationary distributions, and absorption states.
Null recurrent states: Null recurrent states are types of states in a Markov chain that, while being recurrent (meaning the process will return to them infinitely often), have an expected return time that is infinite. This means that although the system will eventually revisit these states, the time between returns can vary widely, leading to long gaps. They are crucial in understanding the long-term behavior of stochastic processes, particularly when discussing absorption and ergodicity.
Population Dynamics: Population dynamics refers to the study of how populations change over time, including factors that influence their size, density, and distribution. It examines various processes such as birth, death, immigration, and emigration, which can be modeled through mathematical frameworks. This concept is essential in understanding systems that evolve stochastically, influencing predictions about future population states and behaviors in various contexts.
Probability Generating Functions: Probability generating functions (PGFs) are mathematical tools used to encode the probability distribution of a discrete random variable. They provide a compact representation of the probabilities associated with different outcomes and are particularly useful in analyzing various properties of random variables, such as moments and distributions. In the context of stochastic processes, PGFs play a significant role in studying absorption and ergodicity, especially in systems that can transition between states over time.
Queueing Theory: Queueing theory is the mathematical study of waiting lines, which helps analyze and model the behavior of queues in various systems. It explores how entities arrive, wait, and are served, allowing us to understand complex processes such as customer service, network traffic, and manufacturing operations.
Recurrent States: Recurrent states are states in a stochastic process that, once entered, are guaranteed to be revisited infinitely often over time. This concept is crucial for understanding long-term behavior in Markov chains, particularly in relation to absorption and ergodicity, as recurrent states ensure that the process does not escape these states permanently, allowing for long-term stability and predictability.
Stationary distribution: A stationary distribution is a probability distribution that remains unchanged as the process evolves over time in a Markov chain. It describes the long-term behavior of the chain, where the probabilities of being in each state stabilize and do not vary as time progresses, connecting to key concepts like state space, transition probabilities, and ergodicity.
Transient states: Transient states in a Markov chain are states that, once left, may never be returned to. They are critical for understanding the behavior of Markov processes, particularly in relation to the long-term predictions and stability of a system. A transient state signifies that there is a non-zero probability that the process will never revisit that state, impacting the overall structure and dynamics of the Markov chain, especially in terms of absorption and ergodicity.
Transition matrix: A transition matrix is a square matrix used to describe the probabilities of moving from one state to another in a Markov chain. Each entry in the matrix represents the probability of transitioning from a particular state to another state, and the sum of each row equals one. This structure allows for the analysis of state changes and helps understand the long-term behavior of the system being studied.