# Bernstein–von Mises theorem

In Bayesian inference, the **Bernstein–von Mises theorem** provides the basis for the important result that the posterior distribution for unknown quantities in any problem is effectively asymptotically independent of the prior distribution (assuming it obeys Cromwell's rule) as the data sample grows large.^{[1]}
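The washing-out of the prior can be illustrated with a minimal conjugate example (this sketch is not from the original article; the model, prior parameters, and sample size are chosen purely for illustration). With a Bernoulli(θ) model, a Beta(a, b) prior yields a Beta posterior, so two very different priors can be compared directly as the sample grows:

```python
# Illustrative sketch: with a Bernoulli(theta) model, two sharply
# different Beta priors yield nearly identical posteriors once the
# sample is large, as the Bernstein-von Mises theorem suggests.
# All parameter choices below are hypothetical.

def posterior_params(a, b, successes, failures):
    """Beta(a, b) prior + Bernoulli data -> Beta posterior parameters
    (conjugacy: just add the counts)."""
    return a + successes, b + failures

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# A large sample with empirical frequency 0.7 (n = 10,000).
successes, failures = 7000, 3000

# Prior 1: uniform Beta(1, 1).  Prior 2: strongly informative Beta(50, 2).
m1 = posterior_mean(*posterior_params(1, 1, successes, failures))
m2 = posterior_mean(*posterior_params(50, 2, successes, failures))

# Despite the very different priors, both posterior means sit near 0.7.
print(abs(m1 - m2))
```

With only a handful of observations the two posterior means would differ noticeably; the data swamp the prior only asymptotically, which is exactly the content of the theorem.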

## History

The theorem is named after Richard von Mises and S. N. Bernstein, although the first proper proof was given by Joseph L. Doob in 1949 for random variables with a finite probability space.^{[2]} Later, Lucien Le Cam, his PhD student Lorraine Schwartz, David A. Freedman, and Persi Diaconis extended the proof under more general assumptions.

## Limitations

A remarkable result was found by Freedman in 1965: the Bernstein–von Mises theorem does not hold almost surely if the random variable has a countably infinite probability space; this, however, depends on allowing a very broad range of possible priors. In practice, the priors typically used in research do have the desirable property even on a countably infinite probability space.

Different summary statistics, such as the mode and mean, may behave differently in the posterior distribution. In Freedman's examples, the posterior density and its mean can converge to the wrong result, while the posterior mode is consistent and converges to the correct result.

## Quotations

The statistician A. W. F. Edwards has remarked, "It is sometimes said, in defence of the Bayesian concept, that the choice of prior distribution is unimportant in practice, because it hardly influences the posterior distribution at all when there are moderate amounts of data. The less said about this 'defence' the better."^{[3]}

## References

van der Vaart, A. W. (1998). *Asymptotic Statistics*. Cambridge University Press. ISBN 0-521-78450-6.

Doob, Joseph L. (1949). "Application of the theory of martingales". *Colloq. Intern. du C.N.R.S (Paris)*. **13**: 23–27.

Edwards, A. W. F. (1992). *Likelihood*. Baltimore: Johns Hopkins University Press. ISBN 0-8018-4443-6.