

Spotlight in Workshop: Subset Selection in Machine Learning: From Theory to Applications

Sparse Bayesian Learning via Stepwise Regression

Sebastian Ament · Carla Gomes

Keywords: Non-convex Optimization


Abstract:

Sparse Bayesian Learning (SBL) is a powerful paradigm for attaining sparsity in probabilistic models. Herein, we propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP) and show that, as its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression. Further, we derive novel guarantees for Stepwise Regression algorithms, which also shed light on RMP. Our guarantees for Forward Regression improve on deterministic and probabilistic results for Orthogonal Matching Pursuit with noise. Our analysis of Backward Regression on determined systems culminates in a bound on the residual of the optimal solution to the subset selection problem that, if satisfied, guarantees the optimality of the result. To our knowledge, this bound is the first that can be computed in polynomial time and depends chiefly on the smallest singular value of the matrix.
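For readers unfamiliar with the terminology, the following is a minimal sketch of the two generic Stepwise Regression primitives the abstract refers to: Forward Regression, which greedily adds the column that most reduces the least-squares residual, and Backward Regression, which starts from the full support and repeatedly drops the column whose removal increases the residual the least. This is an illustrative textbook implementation under our own assumptions (a design matrix A and target y), not the paper's RMP algorithm or its analysis; all function names here are ours.

import numpy as np

def forward_regression(A, y, k):
    """Greedy forward selection: add the column that most reduces
    the least-squares residual, until k columns are chosen."""
    n, m = A.shape
    support = []
    for _ in range(k):
        best_j, best_norm = None, np.inf
        for j in range(m):
            if j in support:
                continue
            cols = A[:, support + [j]]
            x, *_ = np.linalg.lstsq(cols, y, rcond=None)
            res_norm = np.linalg.norm(y - cols @ x)
            if res_norm < best_norm:
                best_j, best_norm = j, res_norm
        support.append(best_j)
    return sorted(support)

def backward_regression(A, y, k):
    """Backward elimination: start from all columns and drop the one
    whose removal increases the residual the least, until k remain."""
    support = list(range(A.shape[1]))
    while len(support) > k:
        best_j, best_norm = None, np.inf
        for j in support:
            cols = [i for i in support if i != j]
            x, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            res_norm = np.linalg.norm(y - A[:, cols] @ x)
            if res_norm < best_norm:
                best_j, best_norm = j, res_norm
        support.remove(best_j)
    return sorted(support)

# Usage example with a planted 2-sparse signal; in the noiseless,
# well-conditioned case both variants recover the true support.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]
y = A @ x_true
print(forward_regression(A, y, 2))   # expected: [1, 5]
print(backward_regression(A, y, 2))  # expected: [1, 5]

Note that Forward Regression as sketched differs from Orthogonal Matching Pursuit, which selects the column most correlated with the current residual rather than the one minimizing the refit residual; the backward pass assumes an overdetermined system (n >= m), matching the determined-system setting the abstract analyzes.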