Efficient learning of smooth probability functions from Bernoulli tests with guarantees

Paul Rolland · Ali Kavis · Alexander Niklaus Immer · Adish Singla · Volkan Cevher

Pacific Ballroom #233

Keywords: [ Kernel Methods ] [ Clustering ] [ Approximate Inference ]


We study the fundamental problem of learning an unknown, smooth probability function via point-wise Bernoulli tests. We provide a scalable algorithm for efficiently solving this problem with rigorous guarantees. In particular, we prove the convergence rate of our posterior update rule to the true probability function in the L2 norm. Moreover, we allow the Bernoulli tests to depend on contextual features, and provide a modified inference engine with provable guarantees for this novel setting. Numerical results show that the empirical convergence rates match the theory, and demonstrate that our approach outperforms the state of the art in handling contextual features.
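To make the setting concrete, here is a minimal toy sketch of the general problem: a smooth probability function is observed only through point-wise Bernoulli tests, and a posterior-style estimate is recovered from those binary outcomes. This is an illustrative baseline (a kernel-smoothed Beta posterior mean), not the paper's algorithm; the function `true_p`, the bandwidth, and the prior counts are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth probability function on [0, 1] (an assumption
# for this sketch, not taken from the paper).
def true_p(x):
    return 0.5 + 0.4 * np.sin(2 * np.pi * x)

# Point-wise Bernoulli tests: query random locations, observe 0/1 outcomes.
n = 2000
xs = rng.uniform(0.0, 1.0, size=n)
ys = rng.binomial(1, true_p(xs))

# Baseline estimator: weight each test by a Gaussian kernel around the
# query point and form pseudo-counts for a Beta(a, b) posterior; return
# its mean. Bandwidth and Beta(1, 1) prior are illustrative choices.
def estimate_p(x, xs, ys, bandwidth=0.05, prior_a=1.0, prior_b=1.0):
    w = np.exp(-0.5 * ((xs - x) / bandwidth) ** 2)
    a = prior_a + np.sum(w * ys)
    b = prior_b + np.sum(w * (1 - ys))
    return a / (a + b)

# Empirical L2 error of the estimate on a grid, mirroring the kind of
# L2-norm convergence measured in the paper's experiments.
grid = np.linspace(0.0, 1.0, 101)
est = np.array([estimate_p(x, xs, ys) for x in grid])
l2_err = np.sqrt(np.mean((est - true_p(grid)) ** 2))
print(f"empirical L2 error: {l2_err:.3f}")
```

As the number of tests grows (and the bandwidth shrinks appropriately), this error decreases; the paper's contribution is a scalable posterior update with a proven rate for this convergence, including the contextual-feature setting.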