abstract: Mean field theory is a powerful approach, originally developed in statistical mechanics, for studying complex systems in the limit where each degree of freedom interacts with many others. Advanced mean field techniques, honed on canonical physics models such as spin glasses, have over the years been fruitfully applied far beyond physics. They are now crucial tools in the study of a plethora of important inference problems arising in signal processing, machine learning, information theory and error-correcting codes, among other areas.
In this course I will show that some physical models are intimately linked to problems in Bayesian inference, so that the language of statistical mechanics can be used to describe them. I will then illustrate how mean field theory can be applied to modern inference and learning problems such as matrix factorization/principal component analysis, compressive sensing, and the perceptron neural network. Finally, I will present very recent methods that make it possible to prove that mean field theory is indeed exact in certain inference settings.