Poster

Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy

Blake Woodworth · Konstantin Mishchenko · Francis Bach

Exhibit Hall 1 #134

Abstract: We present an algorithm for minimizing an objective with hard-to-compute gradients by using a related, easier-to-access function as a proxy. Our algorithm is based on approximate proximal-point iterations on the proxy combined with relatively few stochastic gradients from the objective. When the difference between the objective and the proxy is $\delta$-smooth, our algorithm guarantees convergence at a rate matching stochastic gradient descent on a $\delta$-smooth objective, which can lead to substantially better sample efficiency. Our algorithm has many potential applications in machine learning, and provides a principled means of leveraging synthetic data, physics simulators, mixed public and private data, and more.
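To make the scheme concrete, below is a minimal sketch of the proxy-point idea described in the abstract: each outer step draws one expensive stochastic gradient of the objective $f$, then approximately solves a gradient-corrected, $\delta$-regularized proximal subproblem on the cheap proxy $h$ using many inexpensive inner steps. The function names (grad_objective_stochastic, grad_proxy), step counts, and the plain gradient-descent inner solver are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def proxy_point_method(x0, grad_objective_stochastic, grad_proxy,
                       delta, outer_steps=100, inner_steps=50, inner_lr=0.01):
    """Approximately minimize an expensive objective f via a cheap proxy h.

    Sketch only (assumed details, not the authors' code). Each outer
    iteration approximately solves

        x_{t+1} ~= argmin_x  h(x) + <g_f - g_h, x> + (delta/2) * ||x - x_t||^2,

    where g_f is a stochastic gradient of f at x_t and g_h = grad h(x_t).
    When f - h is delta-smooth, this behaves like SGD on a delta-smooth
    objective while using only one gradient of f per outer step.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_steps):
        # One (expensive) stochastic gradient of the true objective.
        g_f = grad_objective_stochastic(x)
        # Linear correction so the subproblem's gradient agrees with f at x_t.
        correction = g_f - grad_proxy(x)
        x_t = x.copy()
        # Inexact proximal step: cheap gradient descent on the proxy subproblem.
        for _ in range(inner_steps):
            subgrad = grad_proxy(x) + correction + delta * (x - x_t)
            x = x - inner_lr * subgrad
    return x

The inner loop touches only the proxy, so its cost can be amortized against the far fewer calls to the objective's gradient, which is where the sample-efficiency gain comes from when objective-gradient queries (e.g., private or real-world data) are the expensive resource.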
