

Invited talk in Workshop: Negative Dependence and Submodularity: Theory and Applications in Machine Learning

From random matrices to kernel quadrature: how repulsiveness can speed up Monte Carlo integration

Rémi Bardenet


Abstract: Eigenvalues of many random matrix models tend to repel each other. They sometimes repel so strongly that sample averages over these eigenvalues converge much faster than if the eigenvalues were i.i.d. I will first show how to turn this kind of result into a generic importance sampler with mean square error decreasing as $N^{-1-1/d}$, where $N$ is the number of integrand evaluations and $d$ is the ambient dimension. This result crucially depends on a repulsive point process for the integration nodes, called an orthogonal polynomial ensemble, itself a particular case of a determinantal point process (DPP). With more assumptions on the integrand and more general DPPs, I will then show how to obtain faster Monte Carlo rates. Further generalizing to mixtures of DPPs, I will finally show how to obtain tight integration rates for integrands in a large class of reproducing kernel Hilbert spaces. This last result involves a continuous equivalent of volume sampling, a discrete point process of recent interest in numerical linear algebra. This talk is intended to connect with a few other talks of the day, and is based on the following papers:
https://arxiv.org/abs/1605.00361
https://arxiv.org/abs/1906.07832
https://arxiv.org/abs/2002.09677
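As a minimal illustration of the opening claim (a self-contained sketch, not code from the talk): the eigenvalues of a Gaussian Unitary Ensemble (GUE) matrix form a determinantal point process, and the variance of a sample average over them decays roughly like $N^{-2}$, against the usual $N^{-1}$ for i.i.d. draws from the same limiting semicircle distribution. The numpy script below compares the two empirically; the test integrand $f(x) = x^2$ and the specific matrix sizes are arbitrary choices made for the demo.

import numpy as np

rng = np.random.default_rng(0)

def gue_eigenvalues(n, rng):
    # Eigenvalues of an n x n GUE-type matrix, rescaled so their empirical
    # distribution converges to the semicircle law on [-2, 2].
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (a + a.conj().T) / 2
    return np.linalg.eigvalsh(h) / np.sqrt(n)

def semicircle_iid(n, rng):
    # n i.i.d. draws from the semicircle density sqrt(4 - x^2) / (2 pi)
    # on [-2, 2], via rejection sampling from a uniform envelope.
    samples = []
    while len(samples) < n:
        x = rng.uniform(-2.0, 2.0, size=2 * n)
        u = rng.uniform(0.0, 1.0, size=2 * n)
        samples.extend(x[u <= np.sqrt(4.0 - x**2) / 2.0])
    return np.array(samples[:n])

def f(x):
    # Smooth test integrand; its mean under the semicircle law is exactly 1.
    return x**2

for n in (64, 128, 256):
    reps = 200
    gue = [f(gue_eigenvalues(n, rng)).mean() for _ in range(reps)]
    iid = [f(semicircle_iid(n, rng)).mean() for _ in range(reps)]
    # The i.i.d. variance shrinks like 1/n; the repulsive (determinantal)
    # eigenvalues give a variance shrinking roughly like 1/n^2.
    print(f"n={n:4d}  var(GUE avg)={np.var(gue):.2e}  var(iid avg)={np.var(iid):.2e}")

Running the script, the i.i.d. variance should drop by about a factor of 2 each time $n$ doubles, while the GUE variance should drop by about a factor of 4, which is the faster-than-i.i.d. convergence the abstract refers to. The generic importance sampler of the first paper above generalizes this effect beyond eigenvalue averages by sampling nodes $x_1, \dots, x_N$ from an orthogonal polynomial ensemble with kernel $K_N$ and weighting each node by the inverse kernel diagonal, $\hat{I}(f) = \sum_{i=1}^N f(x_i) / K_N(x_i, x_i)$.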
