Even Faster Kernel Matrix Linear Algebra via Density Estimation
Rikhav Shah ⋅ Sandeep Silwal ⋅ Haike Xu
Abstract
This paper studies the use of *kernel density estimation* (KDE) for linear algebraic tasks involving the *kernel matrix* of a collection of $n$ data points in $\mathbb{R}^d$. In particular, we improve upon the best existing algorithms for computing the following quantities up to $(1+\varepsilon)$ relative error for a Gaussian kernel matrix and other kernels: matrix-vector products, matrix-matrix products, the spectral norm, and the sum of all entries. The runtimes of our algorithms depend linearly on the dimension $d$, sub-quadratically on the number of points $n$, and polynomially on the inverse target error $1/\varepsilon$. Importantly, the dependence on $n$ in each case is far lower when accessing the kernel matrix through KDE queries rather than by reading individual entries. Our improvements over the existing best algorithms (in particular those of [Backurs et al., ICML '21]) for these tasks reduce the polynomial dependence on $\varepsilon$, and additionally decrease the dependence on $n$ in the case of computing the sum of all entries of the kernel matrix. For example, we reduce the exponent of $1/\varepsilon$ from $\approx 7.7$ to $\approx 3.2$ for estimating the spectral norm of a Gaussian kernel matrix up to $(1+\varepsilon)$ relative error. We complement our upper bounds with several lower bounds for related problems, which provide (conditional) quadratic-time hardness results and additionally hint at the limits of KDE-based approaches for the problems we study.
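To make the connection between KDE queries and kernel-matrix linear algebra concrete, the following is a small numerical sketch. It uses an exact, brute-force Gaussian KDE with unit bandwidth purely for illustration (the paper's algorithms instead rely on fast approximate KDE data structures, which are not implemented here), and checks that $n$ KDE queries recover the product of the kernel matrix $K$ with the uniform vector $\frac{1}{n}\mathbf{1}$, and hence also $\frac{1}{n}$ times the sum of all entries of $K$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.standard_normal((n, d))  # n data points in R^d

def gaussian_kde(q, X):
    """Exact Gaussian KDE query: (1/n) * sum_i exp(-||q - x_i||^2)."""
    sq_dists = np.sum((X - q) ** 2, axis=1)
    return np.exp(-sq_dists).mean()

# The Gaussian kernel matrix K with K[i, j] = exp(-||x_i - x_j||^2).
K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))

# n KDE queries (one per data point) recover K @ ((1/n) * ones):
kde_answers = np.array([gaussian_kde(x, X) for x in X])
assert np.allclose(kde_answers, K @ np.full(n, 1.0 / n))

# Summing those n answers gives (1/n) times the sum of all entries of K:
assert np.isclose(kde_answers.sum(), K.sum() / n)
```

This exact evaluation costs $O(n^2 d)$ time; the point of KDE-based approaches is that approximate KDE queries answer each such query sub-linearly in $n$, which is what enables sub-quadratic algorithms for the tasks listed above.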