Oral
LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data Approximations
Brian Trippe · Jonathan Huggins · Raj Agrawal · Tamara Broderick
Due to the ease of modern data collection, practitioners often face a large collection of covariates and the need to understand their relation to some response. Generalized linear models (GLMs) offer a particularly interpretable framework for this analysis. In the high-dimensional case without an overwhelming amount of data per parameter, we expect uncertainty to be non-trivial; a Bayesian approach allows coherent quantification of this uncertainty. Unfortunately, existing methods for Bayesian inference in GLMs require running times roughly cubic in the parameter dimension, thus limiting their applicability in increasingly widespread settings with tens of thousands of parameters. We propose to reduce time and memory costs with a low-rank approximation of the data. We show that our method, which we call LR-GLM, still provides a full Bayesian posterior approximation and admits a running time reduced by a full factor of the parameter dimension. We theoretically establish the quality of our approximation via interpretable error bounds and show how the choice of rank allows a tunable computational-statistical trade-off. Experiments support our theory and demonstrate the efficacy of LR-GLM on real, large-scale datasets.
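To make the low-rank idea in the abstract concrete, below is a minimal numpy sketch of the conjugate-Gaussian special case: Bayesian linear regression with prior beta ~ N(0, sigma2_beta * I) and likelihood y ~ N(X beta, sigma2 * I), where the design matrix is replaced by its projection onto the top-M right singular vectors. This is an illustration under stated assumptions, not the paper's full algorithm; the function name `lr_gaussian_posterior` and the hyperparameters are our own illustrative choices, and the general (non-Gaussian) GLM case in LR-GLM additionally relies on a Laplace approximation, which this sketch omits.

```python
import numpy as np

def lr_gaussian_posterior(X, y, M, sigma2=1.0, sigma2_beta=1.0):
    """Approximate posterior for Bayesian linear regression using a rank-M
    data approximation (illustrative sketch of the LR-GLM idea).

    Prior: beta ~ N(0, sigma2_beta * I); likelihood: y ~ N(X beta, sigma2 * I).
    Projecting X onto its top-M right singular vectors means the posterior
    mean and marginal variances never require forming or inverting a DxD
    matrix; all post-SVD costs are O(N*D + D*M).
    """
    # Top-M right singular vectors V (D x M) and singular values s (M,).
    # Note: a full SVD is used here for simplicity; for the advertised
    # speedups one would use a partial/randomized SVD instead.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    V, s = Vt[:M].T, s[:M]

    # M-dimensional projected sufficient statistic z = V^T X^T y.
    z = V.T @ (X.T @ y)

    # Posterior mean: V diag(sigma2_beta / (sigma2 + sigma2_beta * s^2)) z,
    # i.e., ridge-style shrinkage applied only along the retained directions.
    d = sigma2_beta / (sigma2 + sigma2_beta * s**2)
    mean = V @ (d * z)

    # Posterior covariance is sigma2_beta*I minus a rank-M correction
    # (via the Woodbury identity); we return only its diagonal.
    c = sigma2_beta**2 * s**2 / (sigma2 + sigma2_beta * s**2)
    marg_var = sigma2_beta - (V**2 @ c)
    return mean, marg_var

# Example usage: D >> N regression with a rank-20 data approximation.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5000))
y = X @ rng.standard_normal(5000) + rng.standard_normal(500)
mean, marg_var = lr_gaussian_posterior(X, y, M=20)
```

Directions outside the retained rank-M subspace are untouched by the data, so their marginal variances revert to the prior variance sigma2_beta; this is one way to see the computational-statistical trade-off that the choice of rank controls.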