

Poster

Conditional Gradient Methods via Stochastic Path-Integrated Differential Estimator

Alp Yurtsever · Suvrit Sra · Volkan Cevher

Pacific Ballroom #85

Keywords: [ Optimization - Others ] [ Online Learning ] [ Non-convex Optimization ] [ Large Scale Learning and Big Data ] [ Convex Optimization ]


Abstract:

We propose a class of variance-reduced stochastic conditional gradient methods. By adapting the recent stochastic path-integrated differential estimator (SPIDER) technique of Fang et al. (2018) to the classical Frank-Wolfe (FW) method, we introduce SPIDER-FW for finite-sum minimization as well as for more general expectation-minimization problems. SPIDER-FW enjoys superior complexity guarantees in the non-convex setting, while matching the best known FW variants in the convex case. We also extend our framework à la the conditional gradient sliding (CGS) method of Lan & Zhou (2016), and propose SPIDER-CGS.
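Since only the abstract is shown here, the following is a minimal sketch of how a SPIDER-style Frank-Wolfe iteration can be assembled, not the authors' implementation. It assumes a finite-sum least-squares objective over an l1-ball constraint; the function names (`spider_fw`, `lmo_l1_ball`), the epoch length `q`, the minibatch size, and the classical `2/(t+2)` step size are all illustrative choices rather than the parameters analyzed in the paper.

```python
import numpy as np

def lmo_l1_ball(grad, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <grad, s> is a signed vertex."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -radius * np.sign(grad[i])
    return s

def spider_fw(A, b, T=500, q=20, batch=8, radius=1.0, seed=0):
    """Sketch of SPIDER-FW for f(x) = (1/2n) ||Ax - b||^2 over the l1 ball.

    Illustrative parameters; the paper's complexity guarantees rely on
    specific choices of epoch length, batch size, and step size.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    x_prev = x.copy()
    v = np.zeros(d)
    for t in range(T):
        if t % q == 0:
            # Restart the path-integrated estimator with the full gradient.
            v = A.T @ (A @ x - b) / n
        else:
            # SPIDER update: evaluate the SAME minibatch at x and x_prev,
            # so the added variance scales with ||x - x_prev||^2.
            idx = rng.choice(n, size=batch, replace=False)
            Ai, bi = A[idx], b[idx]
            g_curr = Ai.T @ (Ai @ x - bi) / batch
            g_prev = Ai.T @ (Ai @ x_prev - bi) / batch
            v = v + g_curr - g_prev
        # Frank-Wolfe step using the variance-reduced gradient estimate.
        s = lmo_l1_ball(v, radius)
        gamma = 2.0 / (t + 2)
        x_prev = x
        x = x + gamma * (s - x)
    return x

# Usage: recover a sparse vector inside the l1 ball (synthetic data).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[0], x_true[1] = 0.5, -0.5
b = A @ x_true
x_hat = spider_fw(A, b, radius=1.0)
```

The key design point the sketch illustrates is that, unlike SVRG-style estimators anchored at a fixed snapshot, the SPIDER estimator is updated recursively along the iterate path, with an occasional full-gradient restart every `q` iterations.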
