Poster

Improved Convergence Rates for Sparse Approximation Methods in Kernel-Based Learning

Sattar Vakili · Jonathan Scarlett · Da-shan Shiu · Alberto Bernacchia

Hall E #727

Keywords: [ OPT: Optimization and Learning under Uncertainty ] [ T: Learning Theory ] [ MISC: Online Learning, Active Learning and Bandits ] [ MISC: Scalable Algorithms ] [ MISC: Kernel methods ] [ T: Online Learning and Bandits ] [ PM: Gaussian Processes ]


Abstract: Kernel-based models such as kernel ridge regression and Gaussian processes are ubiquitous in machine learning applications for regression and optimization. It is well known that a major downside of kernel-based models is their high computational cost: given a dataset of $n$ samples, the cost grows as $\mathcal{O}(n^3)$. Existing sparse approximation methods can significantly reduce this cost, in certain cases down to as low as $\mathcal{O}(n)$. Despite this remarkable empirical success, significant gaps remain in the existing analytical bounds on the error due to approximation. In this work, we provide novel confidence intervals for the Nystr\"om method and the sparse variational Gaussian process approximation method, which we establish using novel interpretations of the approximate (surrogate) posterior variance of the models. Our confidence intervals lead to improved performance bounds in both regression and optimization problems.
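To make the cost reduction concrete, the sketch below (not the authors' construction; the kernel choice, inducing-point locations, and parameter values are illustrative assumptions) shows a Nyström-style inducing-point approximation for Gaussian-process regression in NumPy: the full $n \times n$ kernel matrix is never inverted, and inference only requires solving $m \times m$ systems for $m \ll n$ inducing points, bringing the cost from $\mathcal{O}(n^3)$ down to roughly $\mathcal{O}(n m^2)$.

```python
# Minimal, illustrative sketch of a Nystrom (inducing-point) approximation
# for Gaussian-process regression. All names and parameter values here are
# assumptions chosen for the example, not the paper's method.
import numpy as np


def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)


def nystrom_gp_posterior(X, y, X_test, Z, noise_var=0.1, jitter=1e-8):
    """Approximate GP posterior mean and variance using inducing points Z.

    Exact inference needs (K_nn + noise * I)^{-1} at O(n^3) cost; here the
    kernel is approximated by K_nn ~= K_nz K_zz^{-1} K_zn, so only m x m
    systems are solved (m = number of inducing points).
    """
    K_zz = rbf_kernel(Z, Z) + jitter * np.eye(len(Z))
    K_nz = rbf_kernel(X, Z)
    K_sz = rbf_kernel(X_test, Z)

    # Woodbury-style solve via the m x m matrix A = noise * K_zz + K_zn K_nz.
    A = noise_var * K_zz + K_nz.T @ K_nz
    mean = K_sz @ np.linalg.solve(A, K_nz.T @ y)

    # Approximate (surrogate) posterior variance at the test points.
    prior_var = rbf_kernel(X_test, X_test).diagonal()
    q_var = np.einsum("ij,ji->i", K_sz, np.linalg.solve(K_zz, K_sz.T))
    post_var = prior_var - q_var + noise_var * np.einsum(
        "ij,ji->i", K_sz, np.linalg.solve(A, K_sz.T))
    return mean, post_var


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    Z = np.linspace(-3, 3, 20)[:, None]      # m = 20 inducing points
    X_test = np.linspace(-3, 3, 5)[:, None]
    mu, var = nystrom_gp_posterior(X, y, X_test, Z)
    print(np.round(mu, 3), np.round(var, 3))
```

The `post_var` computed above is one common form of the approximate (surrogate) posterior variance for inducing-point methods; the paper's confidence intervals are stated in terms of such surrogate variances, though the exact quantities analyzed there may differ from this sketch.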
