MARKOV CHAIN
A stochastic process X(t) is a Markov chain if
1] the space of possible values of X(t) is a finite or countable (discrete) set,
2] given the present, the future is independent of the past: for any times t1 < ... < tn-1 < tn and values x1, ..., xn-1, xn,
P(X(tn) = xn | X(t1) = x1, ..., X(tn-1) = xn-1) = P(X(tn) = xn | X(tn-1) = xn-1).
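As an illustration, here is a minimal simulation sketch in Python; the two-state "weather" chain, its transition probabilities, and the helper simulate are hypothetical, chosen only for this example:

    import numpy as np

    # Hypothetical two-state weather chain: 0 = "sunny", 1 = "rainy".
    # P[i, j] = probability of moving from state i to state j in one step.
    P = np.array([[0.9, 0.1],   # sunny -> sunny, sunny -> rainy
                  [0.5, 0.5]])  # rainy -> sunny, rainy -> rainy

    rng = np.random.default_rng(0)

    def simulate(P, x0, n_steps):
        # The next state is drawn using only the current state's row of P,
        # which is exactly the Markov property: given the present, the
        # future does not depend on the past.
        path = [x0]
        for _ in range(n_steps):
            path.append(rng.choice(len(P), p=P[path[-1]]))
        return path

    print(simulate(P, x0=0, n_steps=10))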
The possible values of X(t) are also called states. A Markov chain can be defined on a discrete set of time points (a discrete-time Markov chain) or on a continuous time range (a continuous-time Markov chain). A Markov chain is
- irreducible if it can get from any state to any other state in one or more steps;
- aperiodic if 1 is the greatest common divisor of all possible numbers of steps in which the chain can return to the same state;
- positive recurrent if the expected amount of time it takes to return to any state is finite;
- finite if the number of states is finite;
- infinite if the number of states is infinite;
- ergodic if it is aperiodic and positive recurrent (in practice this means that, if the chain is also irreducible, it converges to the same limiting distribution no matter where it starts; see the sketch after this list).
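To see this convergence numerically, here is a minimal sketch in Python; it reuses the hypothetical two-state weather chain from the simulation above, and the eigenvector computation is a standard linear-algebra method, not something specific to this entry:

    import numpy as np

    # Same hypothetical two-state weather chain as above.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    # Ergodic convergence: the rows of P^n approach the same limiting
    # distribution, so the starting state does not matter.
    print(np.linalg.matrix_power(P, 50))  # both rows ~ [0.8333, 0.1667]

    # The limit pi solves pi = pi P with sum(pi) = 1, i.e. it is the left
    # eigenvector of P for eigenvalue 1, normalized to sum to 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1.0))])
    pi /= pi.sum()
    print(pi)  # ~ [0.8333, 0.1667]

Both computations agree: in the long run the chain spends about 5/6 of its time in state 0, regardless of where it starts.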
Markov chains are a prominent tool for modeling the regimes of stochastic systems. Examples: the credit rating of a company, the number of customers in a shop, cumulative profits in a card game, the weather in Florida, and so on.
The Markov property of a stochastic process is independent of whether the process is stationary, Gaussian, or a martingale; none of these properties implies any of the others. For example, a simple symmetric random walk is a Markov chain and a martingale, but it is not stationary.
MARKOV CHAIN REFERENCES
Lawler, G. F. (1995). Introduction to Stochastic Processes. New York: Chapman and Hall/CRC.
Ross, S. M. (1995). Stochastic Processes (2nd ed.). New York: Wiley.
Karlin, S., & Taylor, H. M. (1975). A First Course in Stochastic Processes (2nd ed.). New York: Academic Press.
Gikhman, I. I., & Skorokhod, A. V. (2004). The Theory of Stochastic Processes II. Springer.