Statistical & Financial Consulting by Stanford PhD


Monte Carlo is an estimation procedure. The basic idea is as follows. You want to know the average value of some random variable. You can't work out its distribution exactly, or you don't want to do the integrals numerically, but you can take samples from that distribution. (The random variable may, for instance, be some complicated function of variables with simple distributions, or its distribution may have a hard-to-compute normalizing factor ["partition function" in statistical mechanics].) To estimate the average, you simply take samples, independently, and average them. If you take enough samples, the law of large numbers says your average must be close to the true value. The central limit theorem says that your average has an approximately Gaussian distribution around the true value.
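A minimal sketch of this recipe, assuming a made-up target: estimating E[exp(-X^2)] for X ~ N(0, 1) by independent sampling, with the central limit theorem supplying an error bar.

```python
import random
import math

# Plain Monte Carlo: draw independent samples, average them, and attach
# a CLT-based standard error to the estimate.
def monte_carlo_mean(g, sampler, n):
    draws = [g(sampler()) for _ in range(n)]
    mean = sum(draws) / n
    # Sample variance of the draws; the estimate is approximately
    # Normal(true value, var / n) by the central limit theorem.
    var = sum((d - mean) ** 2 for d in draws) / (n - 1)
    stderr = math.sqrt(var / n)
    return mean, stderr

random.seed(0)
# Hypothetical target: E[exp(-X^2)] with X standard normal, whose true
# value works out analytically to 1/sqrt(3) ~ 0.5774.
est, se = monte_carlo_mean(lambda x: math.exp(-x * x),
                           lambda: random.gauss(0.0, 1.0),
                           100_000)
```

With 100,000 samples the standard error is on the order of 0.001, so the estimate should land within a few thousandths of the true value.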

Here's one of the canonical examples. Say you want to measure the area of a shape with a complicated, irregular outline. The Monte Carlo approach is to draw a square around the shape and measure the square. Now you throw darts into the square, as uniformly as possible. The fraction of darts falling on the shape gives the ratio of the area of the shape to the area of the square. In fact, you can cast almost any integration or averaging problem into this form. You then need a good way to tell whether a dart landed inside the outline, and a good way to figure out how many darts to throw. Last but not least, you need a good way to throw darts uniformly, i.e., a good random number generator. That's a whole separate art I shan't attempt to describe here.
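The dart-throwing scheme above can be sketched directly, with the "complicated outline" replaced by a shape whose membership test is easy to write down: the unit disk inside the square [-1, 1] x [-1, 1], so the true area is pi.

```python
import random

# Dart-throwing Monte Carlo: the hit fraction times the square's area
# estimates the area enclosed by the outline.
def estimate_area(inside, n, lo=-1.0, hi=1.0):
    square_area = (hi - lo) ** 2
    hits = 0
    for _ in range(n):
        x = random.uniform(lo, hi)   # one "dart", uniform over the square
        y = random.uniform(lo, hi)
        if inside(x, y):
            hits += 1
    return square_area * hits / n

random.seed(1)
# Membership test for the unit disk; any indicator function would do.
area = estimate_area(lambda x, y: x * x + y * y <= 1.0, 200_000)
```

The same code estimates any area (or, with a weight instead of an indicator, any integral) once you can decide hit-or-miss for a single point.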

Now, in fact, you don't strictly need to sample independently. You can have dependence, so long as, in the long run, you visit each point as often as you would with independent samples. This is useful, since it gives a way to exploit properties of Markov chains in designing your sampling strategy, and even to speed up the convergence of your estimates to the true averages. (The classic instance of this is the Metropolis-Hastings algorithm, which gives you a way of sampling from a distribution where all you have to know is the ratio of the probability densities at any two points. This is extremely useful when, as in many problems in statistics and statistical mechanics, the density itself contains a complicated normalizing factor; it drops out of the ratio.)

**MONTE CARLO REFERENCES**

Robert, C. P., & Casella, G. (2004). Monte Carlo Statistical Methods (2nd ed.). New York: Springer.

Lemieux, C. (2009). Monte Carlo and Quasi-Monte Carlo Sampling. Springer.

Liu, J. S. (2008). Monte Carlo Strategies in Scientific Computing. Springer.

Fishman, G. S. (1995). Monte Carlo: Concepts, Algorithms, and Applications. New York: Springer.

Brooks, S., Gelman, A., Jones, G., & Meng, X. (2011). Handbook of Markov Chain Monte Carlo. Chapman & Hall / CRC.

Ross, S. M. (2012). Simulation (5th ed.). Academic Press.

Gilks, W. R., Richardson, S., & Spiegelhalter, D. (Eds.) (1995). Markov Chain Monte Carlo in Practice. Chapman & Hall / CRC.

Berg, B. A. (2004). Markov Chain Monte Carlo Simulations and Their Statistical Analysis (With Web-Based Fortran Code). Hackensack, NJ: World Scientific.

Efron, B., & Hastie, T. (2017). Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. Cambridge University Press.

Glasserman, P. (2003). Monte Carlo Methods in Financial Engineering. New York: Springer-Verlag.

Vose, D. (2008). Risk Analysis: A Quantitative Guide (3rd ed.). John Wiley & Sons.

- Monte Carlo Methods in Dynamic and Stochastic Volatility Models, implementation in R, Hedibert.org
