
Poster
in
Workshop: Duality Principles for Modern Machine Learning

Learning with Primal-Dual Spectral Risk Measures: a Fast Incremental Algorithm

Ronak Mehta · Vincent Roulet · Krishna Pillutla · Zaid Harchaoui

Keywords: [ duality ] [ convex optimization ] [ distributional robustness ] [ distributionally robust learning ] [ incremental optimization ] [ spectral risk measures ] [ stochastic optimization ]


Abstract:

We consider learning with a generalization of rank-weighted objectives known as spectral risk measures (SRMs). SRMs, like other distributionally robust learning objectives, explicitly capture the worst-case performance over a set of possible deviations from the observed training distribution by introducing dual variables that maximize an adversarial objective. Even when the underlying (regularized) losses are smooth and strongly convex, direct stochastic gradient methods fail to converge to the SRM minimizer, due in part to the bias of their gradient estimates. In this work, we introduce a fast incremental optimization algorithm for SRMs that maintains a running estimate of the optimal dual variables by efficiently solving an approximation of the dual problem. Unlike related methods, our approach converges linearly for any smooth SRM and requires tuning a single hyperparameter: the (constant) primal learning rate. Empirically, our optimizer converges in 2-3x fewer passes through the training set than recent baselines on distribution shift and fairness benchmarks.
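To make the objective concrete: the SRM of losses ℓ_1, …, ℓ_n with a non-decreasing spectrum σ_1 ≤ … ≤ σ_n summing to one is Σ_i σ_i ℓ_(i), where ℓ_(i) is the i-th smallest loss, so larger losses receive larger weights. The sketch below illustrates this objective and one incremental primal-dual step in that spirit; it is a minimal assumption-laden illustration, not the authors' exact algorithm. The squared loss, the stored-loss table, and names such as `spectrum` and `incremental_srm_step` are hypothetical choices for the example.

```python
import numpy as np

def srm_objective(losses, spectrum):
    """Spectral risk: inner product of sorted losses with the spectrum.

    `spectrum` is non-decreasing, nonnegative, and sums to one, so the
    largest losses get the largest weights (the superquantile/CVaR is
    the special case of a 0/1 step spectrum).
    """
    return np.sort(losses) @ spectrum

def incremental_srm_step(w, X, y, spectrum, table, lr, rng):
    """One illustrative incremental step (a sketch, not the paper's
    update rule): approximate the optimal dual weights by routing the
    spectrum through the argsort of a table of stored per-example
    losses, then take a weighted gradient step on one sampled example.
    """
    n = len(y)
    i = rng.integers(n)
    # Refresh the stored loss for the sampled example (squared loss here).
    residual = X[i] @ w - y[i]
    table[i] = 0.5 * residual**2
    # Approximate dual variables: the example with the k-th smallest
    # stored loss receives weight spectrum[k].
    q = np.empty(n)
    q[np.argsort(table)] = spectrum
    # Weighted stochastic gradient step; the factor n * q[i] makes the
    # step unbiased for the q-weighted full gradient given the table.
    grad_i = residual * X[i]
    return w - lr * n * q[i] * grad_i

# Usage: superquantile (CVaR) spectrum at level 0.5 puts uniform weight
# on the worst half of the losses.
rng = np.random.default_rng(0)
n, d = 100, 5
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)
spectrum = np.zeros(n)
spectrum[n // 2:] = 1.0 / (n - n // 2)
w = np.zeros(d)
table = 0.5 * (X @ w - y) ** 2
for _ in range(2000):
    w = incremental_srm_step(w, X, y, spectrum, table, lr=0.01, rng=rng)
print(srm_objective(0.5 * (X @ w - y) ** 2, spectrum))
```

The key design point the sketch tries to convey is that the dual weights are maintained from a running table of losses rather than recomputed from each minibatch, which is what removes the bias that makes direct stochastic gradient methods fail on SRMs.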
