Poster
Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy
Blake Woodworth · Konstantin Mishchenko · Francis Bach

Wed Jul 26 02:00 PM -- 03:30 PM (PDT) @ Exhibit Hall 1 #134
We present an algorithm for minimizing an objective with hard-to-compute gradients by using a related, easier-to-access function as a proxy. Our algorithm is based on approximate proximal-point iterations on the proxy combined with relatively few stochastic gradients from the objective. When the difference between the objective and the proxy is $\delta$-smooth, our algorithm guarantees convergence at a rate matching stochastic gradient descent on a $\delta$-smooth objective, which can lead to substantially better sample efficiency. Our algorithm has many potential applications in machine learning, and provides a principled means of leveraging synthetic data, physics simulators, mixed public and private data, and more.
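As a rough illustration of the approach described above (a minimal sketch, not the authors' implementation): the difference h = f - g is linearized at the current iterate, and each outer step approximately solves a regularized subproblem on the cheap proxy g using only proxy gradients. The function names, the inner solver, and all step sizes below are illustrative assumptions.

# Sketch of the proxy idea from the abstract: minimize f, whose gradients
# are expensive, by repeatedly solving a proximal-point-style subproblem on
# a cheap proxy g, corrected by one gradient of f per outer iteration.
# Everything here (names, inner solver, hyperparameters) is an assumption
# for illustration, not code from the paper.
import numpy as np

def proxy_prox_point(grad_f, grad_g, x0, delta,
                     outer_steps=50, inner_steps=200, inner_lr=1e-2):
    """Approximately minimize f using mostly gradients of a proxy g.

    grad_f : callable returning a (possibly stochastic) gradient of f
    grad_g : callable returning a gradient of the proxy g
    delta  : assumed smoothness constant of the difference f - g
    """
    x = x0.copy()
    for _ in range(outer_steps):
        # One expensive gradient of f per outer iteration; this linearizes
        # the difference h = f - g around the current iterate x.
        correction = grad_f(x) - grad_g(x)
        # Inner loop: approximately solve the regularized proxy subproblem
        #   min_y  g(y) + <correction, y> + (delta/2) * ||y - x||^2
        # using plain gradient descent on cheap proxy gradients only.
        y = x.copy()
        for _ in range(inner_steps):
            inner_grad = grad_g(y) + correction + delta * (y - x)
            y -= inner_lr * inner_grad
        x = y
    return x

# Toy usage: f and g are quadratics whose Hessians differ by 0.1 * I,
# so f - g is delta-smooth with delta = 0.1.
A = np.diag([10.0, 1.0])
B = A + 0.1 * np.eye(2)
x_hat = proxy_prox_point(lambda x: A @ x, lambda x: B @ x,
                         np.ones(2), delta=0.1)

Note that each outer iteration consumes a single gradient of the objective f, while all remaining gradient evaluations hit the proxy g; this is the source of the sample-efficiency gain claimed in the abstract.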

Author Information

Blake Woodworth (Inria)
Konstantin Mishchenko (Samsung)
Francis Bach (INRIA - École Normale Supérieure)