

Poster

Simple Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization

Michael Metel · Akiko Takeda

Pacific Ballroom #104

Keywords: [ Supervised Learning ] [ Sparsity and Compressed Sensing ] [ Non-convex Optimization ]


Abstract:

Our work focuses on stochastic gradient methods for optimizing a smooth non-convex loss function with a non-smooth non-convex regularizer. Research on this class of problems is quite limited, and until recently no non-asymptotic convergence results had been reported. We present two simple stochastic gradient algorithms, for finite-sum and general stochastic optimization problems, which have superior convergence complexities compared to the current state of the art. We also compare the practical performance of our algorithms on empirical risk minimization problems.
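To make the problem setting concrete, the sketch below shows a plain mini-batch proximal stochastic gradient loop on a smooth non-convex sigmoid loss with a non-smooth non-convex MCP (minimax concave penalty) regularizer. This is only an illustrative baseline under assumed parameters (step size, penalty constants, synthetic data); it is not the authors' proposed algorithm and makes no claim about its convergence complexity.

```python
# Illustrative sketch only: proximal mini-batch SGD for a smooth non-convex loss
# (sigmoid loss) plus a non-smooth non-convex regularizer (MCP). The data,
# hyperparameters, and helper names are assumptions, not the paper's method.
import numpy as np

def mcp_prox(v, step, lam, gamma):
    """Proximal operator of the MCP penalty, valid when gamma > step."""
    out = np.copy(v)
    absv = np.abs(v)
    small = absv <= step * lam
    mid = (absv > step * lam) & (absv <= gamma * lam)
    out[small] = 0.0
    out[mid] = np.sign(v[mid]) * (absv[mid] - step * lam) / (1.0 - step / gamma)
    # entries with |v| > gamma * lam are left unchanged
    return out

def prox_sgd(A, y, lam=0.1, gamma=4.0, step=0.05, batch=16, iters=2000, seed=0):
    """Mini-batch proximal SGD on a sigmoid loss with an MCP regularizer."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        margins = y[idx] * (A[idx] @ x)
        s = 1.0 / (1.0 + np.exp(margins))  # sigmoid loss values on the batch
        # gradient of the smooth (non-convex) sigmoid loss on the mini-batch
        grad = -(A[idx].T @ (s * (1.0 - s) * y[idx])) / batch
        # proximal step handles the non-smooth non-convex MCP term
        x = mcp_prox(x - step * grad, step, lam, gamma)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 50))
    x_true = np.zeros(50); x_true[:5] = 1.0
    y = np.sign(A @ x_true + 0.1 * rng.standard_normal(500))
    x_hat = prox_sgd(A, y)
    print("nonzero coefficients:", np.count_nonzero(x_hat))
```

The MCP term keeps the regularizer non-smooth and non-convex while still admitting a closed-form proximal step, which is why it is a common test case for this problem class.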
