Poster

Variance Reduction via Primal-Dual Accelerated Dual Averaging for Nonsmooth Convex Finite-Sums

Chaobing Song · Stephen Wright · Jelena Diakonikolas

Keywords: [ Convex Optimization ]


Abstract: Structured nonsmooth convex finite-sum optimization appears in many machine learning applications, including support vector machines and least absolute deviation. For the primal-dual formulation of this problem, we propose a novel algorithm called \emph{Variance Reduction via Primal-Dual Accelerated Dual Averaging (VRPDA$^2$)}. In the nonsmooth and general convex setting, VRPDA$^2$ has the overall complexity $O(nd\log\min\{1/\epsilon, n\} + d/\epsilon)$ in terms of the primal-dual gap, where $n$ denotes the number of samples, $d$ the dimension of the primal variables, and $\epsilon$ the desired accuracy. In the nonsmooth and strongly convex setting, the overall complexity of VRPDA$^2$ becomes $O(nd\log\min\{1/\epsilon, n\} + d/\sqrt{\epsilon})$ in terms of both the primal-dual gap and the distance between the iterate and the optimal solution. Both of these results for VRPDA$^2$ improve significantly on the state-of-the-art complexity estimates ($O(nd\log\min\{1/\epsilon, n\} + \sqrt{n}d/\epsilon)$ for the nonsmooth and general convex setting and $O(nd\log\min\{1/\epsilon, n\} + \sqrt{n}d/\sqrt{\epsilon})$ for the nonsmooth and strongly convex setting), and they are obtained with a simpler and more straightforward algorithm and analysis. Moreover, both complexities are better than \emph{lower} bounds for general convex finite-sum optimization, because our approach makes use of additional, commonly occurring structure. Numerical experiments reveal competitive performance of VRPDA$^2$ compared to state-of-the-art approaches.
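To make the setting concrete, here is a minimal sketch (in LaTeX) of the problem class and the primal-dual reformulation that such algorithms operate on, assuming the standard linear-predictor structure that covers the support vector machine and least absolute deviation examples above; the symbols $f_i$, $g$, $a_i$, and $f_i^*$ are illustrative and need not match the paper's exact notation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Structured nonsmooth convex finite-sum: each loss $f_i$ is convex and
% possibly nonsmooth, applied to a linear predictor $\langle a_i, x \rangle$;
% $g$ is a convex regularizer.
\begin{equation*}
  \min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i(\langle a_i, x \rangle) + g(x).
\end{equation*}
% Primal-dual reformulation: replacing each $f_i$ by its Fenchel conjugate
% $f_i^*(y_i) = \sup_{u} \{\, u\, y_i - f_i(u) \,\}$ turns the nonsmooth sum
% into a bilinearly coupled convex-concave saddle-point problem over $(x, y)$.
\begin{equation*}
  \min_{x \in \mathbb{R}^d} \, \max_{y \in \mathbb{R}^n} \;
  \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i \langle a_i, x \rangle - f_i^*(y_i) \bigr) + g(x).
\end{equation*}
\end{document}

For the hinge loss (support vector machines) and the absolute-value loss (least absolute deviation), the conjugates $f_i^*$ have simple closed forms with bounded domains; this is the kind of additional, commonly occurring structure the abstract refers to when explaining how the stated complexities can beat lower bounds for general convex finite-sums.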
