Statistical & Financial Consulting by Stanford PhD
MARKOV CHAIN MONTE CARLO

Markov Chain Monte Carlo (MCMC) methods simulate realizations from a complex distribution of interest. They are so named because each sample value is generated randomly from the previous sample value, producing a Markov chain along the way (the transition probability from x to x' depends only on x). The Markov chain must first converge to its equilibrium distribution; only then can the simulated values be used. The speed at which the chain approaches the equilibrium distribution is called the mixing speed. A classic example of Markov Chain Monte Carlo is the Metropolis-Hastings algorithm.
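The Metropolis-Hastings algorithm can be sketched in a few lines. Below is a minimal random-walk version in Python (the function name and parameters are illustrative, not a standard API), targeting a standard normal density known only up to a normalizing constant:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, burn_in=1000):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2) and
    accept with probability min(1, p(x') / p(x))."""
    x = x0
    samples = []
    for i in range(burn_in + n_samples):
        proposal = x + random.gauss(0.0, step)
        # The proposal is symmetric, so the Hastings ratio reduces to p(x')/p(x).
        delta = log_target(proposal) - log_target(x)
        if delta >= 0 or random.random() < math.exp(delta):
            x = proposal                  # accept: the chain moves
        if i >= burn_in:                  # discard draws made before convergence
            samples.append(x)
    return samples

# Target: standard normal density, known up to a constant, so log p(x) = -x^2/2.
random.seed(42)
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=50000)
mean = sum(draws) / len(draws)            # close to the true mean, 0
```

The burn-in period discards the early part of the chain, before it has converged to the equilibrium distribution; the `step` size trades off acceptance rate against mixing speed.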

Major topics in Markov Chain Monte Carlo research include: 1) designing Markov chains with high mixing speed while keeping the number of supplementary calculations low; 2) designing Markov chains that are relatively insensitive to the choice of initial values; 3) designing convergence diagnostics. Markov Chain Monte Carlo methods are used in two major contexts: 1) when the distribution is known, to approximate some complex characteristic of the distribution by simulating it; 2) when the parameters of the distribution of X are unknown, to estimate them using Bayesian analysis: assume that the parameters follow some prior distribution, simulate them jointly with X so that the values drawn at a given iteration depend on the values drawn at the previous iteration, and, after the Markov chain has converged, estimate the parameters by averaging their simulated values.
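The second context above, Bayesian parameter estimation, can be illustrated with a deliberately simple example: estimating a coin's heads probability under a uniform prior by random-walk Metropolis sampling of the posterior, then averaging the draws. All names here are hypothetical; a problem this simple has a closed-form answer, which makes it a useful check:

```python
import math
import random

def posterior_mean_heads_prob(data, n_samples=20000, burn_in=2000, step=0.1):
    """Sample the posterior of a coin's heads probability p under a uniform
    (Beta(1,1)) prior via random-walk Metropolis, and average the draws."""
    heads = sum(data)
    tails = len(data) - heads
    def log_post(p):
        if not 0.0 < p < 1.0:
            return -math.inf              # zero posterior outside (0, 1)
        return heads * math.log(p) + tails * math.log(1.0 - p)
    p, draws = 0.5, []
    for i in range(burn_in + n_samples):
        q = p + random.gauss(0.0, step)   # propose a nearby value
        delta = log_post(q) - log_post(p)
        if delta >= 0 or random.random() < math.exp(delta):
            p = q                         # accept the proposal
        if i >= burn_in:
            draws.append(p)
    return sum(draws) / len(draws)        # posterior-mean estimate

random.seed(1)
data = [1] * 7 + [0] * 3                  # 7 heads, 3 tails
est = posterior_mean_heads_prob(data)     # exact posterior mean is 8/12 = 0.667
```

Averaging the post-convergence draws approximates the posterior mean, exactly as described above; here the Beta(8, 4) posterior gives 8/12 for comparison.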

Oftentimes, Markov Chain Monte Carlo exploits the Gibbs sampler: a way of simulating a random vector when the conditional distribution of each coordinate given the other coordinates is known. We initialize the coordinates at a reasonable set of values, fix all of them but one, and simulate the remaining coordinate conditional on the values of the other coordinates. In this fashion we cycle through all the coordinates multiple times, until the joint distribution of the simulated coordinates has converged to the true distribution.
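As a minimal sketch of the scheme just described (function and variable names are illustrative), consider a bivariate normal with zero means, unit variances and correlation rho, whose full conditionals are themselves normal:

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=1000):
    """Gibbs sampling from a bivariate normal with zero means, unit variances
    and correlation rho, using the known full conditionals:
    X | Y=y ~ N(rho*y, 1 - rho^2) and Y | X=x ~ N(rho*x, 1 - rho^2)."""
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0                            # reasonable starting values
    samples = []
    for i in range(burn_in + n_samples):
        x = random.gauss(rho * y, sd)      # update x given the current y
        y = random.gauss(rho * x, sd)      # update y given the new x
        if i >= burn_in:                   # keep only post-convergence draws
            samples.append((x, y))
    return samples

random.seed(0)
pairs = gibbs_bivariate_normal(rho=0.8, n_samples=50000)
# With unit variances, E[XY] equals the correlation, so this should be near 0.8.
corr = sum(a * b for a, b in pairs) / len(pairs)
```

Each sweep updates one coordinate at a time from its conditional distribution, exactly the cycling procedure described above.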


MARKOV CHAIN MONTE CARLO REFERENCES

Robert, C. P., & Casella, G. (2004). Monte Carlo Statistical Methods (2nd ed.). New York: Springer.

Liu, J. S. (2008). Monte Carlo Strategies in Scientific Computing. Springer.

Brooks, S., Gelman, A., Jones, G. L., & Meng, X.-L. (Eds.) (2011). Handbook of Markov Chain Monte Carlo. Chapman & Hall/CRC.

Gilks, W. R., Richardson, S., & Spiegelhalter, D. J. (Eds.) (1995). Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC.

Berg, B. A. (2004). Markov Chain Monte Carlo Simulations and Their Statistical Analysis (With Web-Based Fortran Code). Hackensack, NJ: World Scientific.

Lemieux, C. (2009). Monte Carlo and Quasi-Monte Carlo Sampling. Springer.

McElreath, R. (2015). Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Chapman & Hall/CRC.

