Optimal transport is a foundational problem in optimization that enables the comparison of probability distributions while taking geometric aspects into account. Its optimal objective value, the Wasserstein distance, provides an important loss between distributions that has been used in many applications throughout machine learning and statistics. Recent algorithmic progress on this problem and its regularized versions has made these tools increasingly popular. However, existing techniques require solving an optimization problem to obtain a single gradient of the loss, which slows down first-order methods for minimizing a sum of such losses, since these methods require many gradient computations. In this work, we introduce an algorithm to solve a regularized version of this problem of Wasserstein estimators, with a time per step that is sublinear in the natural dimensions of the problem. We introduce a dual formulation and optimize it with stochastic gradient steps that can be computed directly from samples, without solving additional optimization problems at each step. In this way, the estimation and computation tasks are performed jointly. We show that this algorithm can be extended to other tasks, including the estimation of Wasserstein barycenters. We provide theoretical guarantees and illustrate the performance of our algorithm with experiments on synthetic data.
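To make the "stochastic gradient steps computed directly from samples" concrete, here is a minimal sketch, not the authors' exact algorithm, of averaged SGD on the semi-dual of entropy-regularized optimal transport between discrete measures (in the spirit of Genevay et al., 2016). All names (sgd_entropic_ot, cost, eps, step0) are illustrative assumptions; the point is that each step samples a single source point, so its cost scales with one marginal size rather than with the full n × m transport plan.

```python
import numpy as np

def sgd_entropic_ot(x, y, a, b, cost, eps=0.1, n_steps=10_000, step0=1.0, seed=0):
    """Averaged SGD on the semi-dual of entropy-regularized OT (sketch).

    x, y : support points of the source/target measures.
    a, b : probability weights over x and y.
    cost : function cost(xi, y) -> array of costs c(xi, y_j) for all j.

    Hypothetical illustration, not the paper's exact method: each step
    samples one source point i ~ a and updates the dual potential v on
    the target support, so the per-step cost is O(m), sublinear in the
    n*m size of the transport polytope.
    """
    rng = np.random.default_rng(seed)
    n, m = len(a), len(b)
    v = np.zeros(m)       # dual potential on the target support
    v_avg = np.zeros(m)   # Polyak-Ruppert average of the iterates
    for k in range(1, n_steps + 1):
        i = rng.choice(n, p=a)                     # sample from the source measure
        logits = (v - cost(x[i], y)) / eps + np.log(b)
        chi = np.exp(logits - logits.max())
        chi /= chi.sum()                           # conditional plan pi(. | x_i)
        v += (step0 / np.sqrt(k)) * (b - chi)      # stochastic semi-dual gradient step
        v_avg += (v - v_avg) / k
    return v_avg

# Example: two 1-D discrete measures with squared-Euclidean cost.
x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 80)
a = np.full(50, 1 / 50)
b = np.full(80, 1 / 80)
v = sgd_entropic_ot(x, y, a, b, cost=lambda xi, ys: (xi - ys) ** 2)
```

In the paper's estimation setting the measure being optimized is itself unknown and updated jointly, and sampling on both marginals brings the per-step cost below O(m); the sketch above only illustrates the sample-based dual-gradient principle described in the abstract.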
Author Information
Marin Ballu (University of Cambridge)
Quentin Berthet (Google Brain)
Francis Bach (INRIA - Ecole Normale Supérieure)
More from the Same Authors
- 2023: Differentiable Clustering and Partial Fenchel-Young Losses (Lawrence Stewart · Francis Bach · Felipe Llinares-Lopez · Quentin Berthet)
- 2023: Invited Talk 1: Perturbed Optimizers for Learning (Quentin Berthet)
- 2023 Poster: Mirror Sinkhorn: Fast Online Optimization on Transport Polytopes (Marin Ballu · Quentin Berthet)
- 2023 Poster: On Bridging the Gap between Mean Field and Finite Width Deep Random Multilayer Perceptron with Batch Normalization (Amir Joudaki · Hadi Daneshmand · Francis Bach)
- 2023 Poster: Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy (Blake Woodworth · Konstantin Mishchenko · Francis Bach)
- 2022 Poster: Convergence of Uncertainty Sampling for Active Learning (Anant Raj · Francis Bach)
- 2022 Spotlight: Convergence of Uncertainty Sampling for Active Learning (Anant Raj · Francis Bach)
- 2022 Poster: Anticorrelated Noise Injection for Improved Generalization (Antonio Orvieto · Hans Kersting · Frank Proske · Francis Bach · Aurelien Lucchi)
- 2022 Spotlight: Anticorrelated Noise Injection for Improved Generalization (Antonio Orvieto · Hans Kersting · Frank Proske · Francis Bach · Aurelien Lucchi)
- 2021 Poster: Disambiguation of Weak Supervision Leading to Exponential Convergence Rates (Vivien Cabannes · Francis Bach · Alessandro Rudi)
- 2021 Spotlight: Disambiguation of Weak Supervision Leading to Exponential Convergence Rates (Vivien Cabannes · Francis Bach · Alessandro Rudi)
- 2020: Q&A with Francis Bach (Francis Bach)
- 2020: Talk by Francis Bach: Second Order Strikes Back: Globally Convergent Newton Methods for Ill-Conditioned Generalized Self-Concordant Losses (Francis Bach)
- 2020 Poster: Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization (Hadrien Hendrikx · Lin Xiao · Sebastien Bubeck · Francis Bach · Laurent Massoulié)
- 2020 Poster: Fast Differentiable Sorting and Ranking (Mathieu Blondel · Olivier Teboul · Quentin Berthet · Josip Djolonga)
- 2020 Poster: Consistent Structured Prediction with Max-Min Margin Markov Networks (Alex Nowak · Francis Bach · Alessandro Rudi)
- 2020 Poster: Structured Prediction with Partial Labelling through the Infimum Loss (Vivien Cabannes · Alessandro Rudi · Francis Bach)
- 2019 Invited Talk: Online Dictionary Learning for Sparse Coding (Julien Mairal · Francis Bach · Jean Ponce · Guillermo Sapiro)
- 2017 Poster: Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks (Kevin Scaman · Francis Bach · Sebastien Bubeck · Yin Tat Lee · Laurent Massoulié)
- 2017 Talk: Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks (Kevin Scaman · Francis Bach · Sebastien Bubeck · Yin Tat Lee · Laurent Massoulié)