

Poster

Adaptive Learning of Density Ratios in RKHS

Werner Zellinger · Stefan Kindermann · Sergei V. Pereverzyev

Hall C 4-9 #2115
[ Slides ] [ Poster ] [ JMLR ]
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Estimating the ratio of two probability densities from finitely many samples drawn from each density is a central problem in machine learning and statistics, with applications in two-sample testing, divergence estimation, generative modeling, covariate shift adaptation, conditional density estimation, and novelty detection. In this work, we analyze a large class of density ratio estimation methods that minimize a regularized Bregman divergence between the true density ratio and a model in a reproducing kernel Hilbert space (RKHS). We derive new finite-sample error bounds and propose a Lepskii-type parameter choice principle that minimizes the bounds without knowledge of the regularity of the density ratio. In the special case of the square loss, our method adaptively achieves a minimax optimal error rate. A numerical illustration is provided.
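To make the two ingredients of the abstract concrete, here is a minimal sketch of the square-loss special case together with a Lepskii-type balancing rule over a grid of regularization parameters. It is illustrative, not the paper's exact procedure: the Gaussian kernel, the KuLSIF-style closed form for the square-loss minimizer, the noise proxy sigma(lam) = c / (lam * sqrt(n)), the constant 4, and the names fit_ratio / lepskii_select are all assumptions made for this example.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) Gram matrix between the rows of X and the rows of Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * bandwidth**2))

def fit_ratio(Xp, Xq, lam, bandwidth=1.0):
    """Square-loss density ratio fit in an RKHS (KuLSIF-style closed form).

    Models r = p/q as r(x) = sum_j alpha_j k(x, x_j^q) over the denominator
    samples and minimizes the empirical square-loss Bregman objective plus
    lam * ||r||_H^2; stationarity gives the linear system solved below.
    """
    n_p, n_q = len(Xp), len(Xq)
    Kqq = gaussian_kernel(Xq, Xq, bandwidth)        # (n_q, n_q)
    Kqp = gaussian_kernel(Xq, Xp, bandwidth)        # (n_q, n_p)
    alpha = np.linalg.solve(Kqq / n_q + lam * np.eye(n_q),
                            Kqp.sum(axis=1) / n_p)
    return alpha

def lepskii_select(Xp, Xq, lams, bandwidth=1.0, c=1.0):
    """Lepskii-type balancing over an increasing grid of lambdas.

    Assumes (illustratively) a stochastic-error proxy
    sigma(lam) = c / (lam * sqrt(n)), which grows as lam shrinks. The rule
    keeps the largest lam whose estimate stays within 4 * sigma(lam_l), in
    RKHS norm, of every less-regularized estimate lam_l < lam; no knowledge
    of the regularity of p/q is needed.
    """
    n = min(len(Xp), len(Xq))
    lams = np.sort(np.asarray(lams))                # lam_1 < lam_2 < ...
    alphas = [fit_ratio(Xp, Xq, lam, bandwidth) for lam in lams]
    Kqq = gaussian_kernel(Xq, Xq, bandwidth)
    sigma = c / (lams * np.sqrt(n))                 # decreasing in lam
    best = 0
    for m in range(len(lams)):
        # ||r_m - r_l||_H is computed through the Gram matrix on the q-samples
        if all(np.sqrt((alphas[m] - alphas[l]) @ Kqq @ (alphas[m] - alphas[l]))
               <= 4.0 * sigma[l] for l in range(m)):
            best = m
    return lams[best], alphas[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xp = rng.normal(0.0, 1.0, size=(200, 1))        # samples from p
    Xq = rng.normal(0.5, 1.2, size=(200, 1))        # samples from q
    lam, alpha = lepskii_select(Xp, Xq, np.logspace(-3, 0, 10))
    ratio_at_q = gaussian_kernel(Xq, Xq) @ alpha    # estimated p/q at q-samples
    print(f"chosen lambda = {lam:.4g}, mean ratio = {ratio_at_q.mean():.3f}")
```

The design point of a Lepskii-type choice is visible in the loop: the data-driven comparison of estimators against each other replaces the unknown approximation-error term, which is what lets the rule adapt to the regularity of the density ratio without knowing it.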
