abstract: The development of new classification and regression algorithms based on deep neural networks — known as deep learning — has revolutionized the areas of artificial intelligence, machine learning, and data analysis. More recently, these methods have been applied with great success to the numerical solution of high-dimensional partial differential equations. This lecture series will summarize some of these developments from the perspective of applied mathematics.
The lectures start with a brief introduction to statistical learning theory and deep learning. Next we will discuss recent results on the representational power of deep neural networks. In particular, we will show that their representational power is superior to that of virtually all classical approximation methods.
Then we will show that the problem of numerically solving a large class of (high-dimensional) PDEs, such as linear Black-Scholes or diffusion equations, can be cast as a classical statistical learning problem, which can in turn be solved by deep learning methods. Simulations suggest that the resulting algorithms are vastly superior to classical methods such as finite element methods, finite difference methods, spectral methods, or sparse tensor methods. In particular, we empirically observe that these algorithms are capable of breaking the curse of dimensionality. In the last part of the third lecture we will present theoretical results that confirm this observation.
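To illustrate the reformulation alluded to above, here is a minimal sketch (not taken from the lectures) of how a linear diffusion equation becomes a regression problem via the Feynman-Kac formula. For brevity a linear least-squares model stands in for the deep network; the terminal condition, dimension, and feature choice are all illustrative assumptions.

```python
import numpy as np

# Sketch: the heat equation u_t = (1/2) * Laplacian(u) with terminal
# condition g at time T has, by Feynman-Kac, the solution
#   u(0, x) = E[ g(x + W_T) ],   W a d-dimensional Brownian motion.
# Fitting a model to Monte Carlo samples of g(x + W_T) is therefore a
# plain regression ("statistical learning") problem; a deep network
# would play the role of the linear model used here for simplicity.

rng = np.random.default_rng(0)
d, T, n = 100, 1.0, 20000            # dimension, horizon, sample size
g = lambda y: np.sum(y**2, axis=1)   # illustrative terminal condition g(y) = |y|^2

x = rng.normal(size=(n, d))                      # sampled spatial points
y = g(x + np.sqrt(T) * rng.normal(size=(n, d)))  # noisy targets g(x + W_T)

# least-squares fit of u(0, x) ~ a + b * |x|^2 (stand-in for a network);
# for this g the exact solution is u(0, x) = |x|^2 + d * T
features = np.stack([np.ones(n), np.sum(x**2, axis=1)], axis=1)
coef, *_ = np.linalg.lstsq(features, y, rcond=None)

print(coef)  # should be close to (d * T, 1)
```

Note that the cost of the Monte Carlo sampling above grows only linearly in the dimension `d`, which is the mechanism by which such reformulations can avoid the exponential grid sizes of mesh-based methods.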