Building on recent advances in entropy-regularized optimal transport and on Fenchel duality between measures and continuous functions, we propose in this paper a generalization of the logistic loss that incorporates a metric or cost between classes. Unlike previous attempts to use optimal transport distances for learning, our loss results in unconstrained convex objective functions, supports infinite (or very large) class spaces, and naturally defines a geometric generalization of the softmax operator. The geometric properties of this loss make it suitable for predicting sparse and singular distributions, for instance supported on curves or hyper-surfaces. We study the theoretical properties of our loss and showcase its effectiveness on two applications: ordinal regression and drawing generation.
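The abstract's mention of a "geometric generalization of the softmax operator" can be illustrated with a toy sketch. One simple way to make softmax aware of a ground cost between classes is to smooth the usual softmax output with a Gibbs kernel exp(-C/ε) built from the cost matrix C, then renormalize. This is an illustrative assumption for intuition only, not the paper's exact geometric softmax; the function names, the cost matrix, and the `eps` parameter below are hypothetical.

```python
import math

def softmax(scores):
    # Standard softmax with max-shift for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def cost_aware_softmax(scores, cost, eps=1.0):
    """Toy cost-aware softmax: smooth the softmax output with the
    Gibbs kernel K[i][j] = exp(-cost[i][j] / eps) and renormalize.
    Classes that are close under the cost share probability mass;
    as off-diagonal costs grow, this approaches standard softmax."""
    q = softmax(scores)
    n = len(scores)
    p = [sum(math.exp(-cost[i][j] / eps) * q[i] for i in range(n))
         for j in range(n)]
    z = sum(p)
    return [x / z for x in p]

# Three ordinal classes on a line: cost = squared label distance,
# as one might use in the ordinal regression application.
cost = [[(i - j) ** 2 for j in range(3)] for i in range(3)]
scores = [2.0, 0.0, -1.0]
p = cost_aware_softmax(scores, cost, eps=0.5)
```

Here `eps` plays the role of the entropic regularization strength: small `eps` recovers a sharper, near-standard softmax, while large `eps` blurs mass across nearby classes.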
Author Information
Arthur Mensch (ENS)
Mathieu Blondel (NTT)
Gabriel Peyré (CNRS and ENS)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Geometric Losses for Distributional Learning
  Fri. Jun 14th 01:30 -- 04:00 AM, Room: Pacific Ballroom #179
More from the Same Authors
- 2023 Poster: Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective
  Michael Sander · Joan Puigcerver · Josip Djolonga · Gabriel Peyré · Mathieu Blondel
- 2022 Poster: Unsupervised Ground Metric Learning Using Wasserstein Singular Vectors
  Geert-Jan Huizing · Laura Cantini · Gabriel Peyré
- 2022 Spotlight: Unsupervised Ground Metric Learning Using Wasserstein Singular Vectors
  Geert-Jan Huizing · Laura Cantini · Gabriel Peyré
- 2022 Poster: Linear-Time Gromov Wasserstein Distances using Low Rank Couplings and Costs
  Meyer Scetbon · Gabriel Peyré · Marco Cuturi
- 2022 Spotlight: Linear-Time Gromov Wasserstein Distances using Low Rank Couplings and Costs
  Meyer Scetbon · Gabriel Peyré · Marco Cuturi
- 2021 Poster: Low-Rank Sinkhorn Factorization
  Meyer Scetbon · Marco Cuturi · Gabriel Peyré
- 2021 Poster: Momentum Residual Neural Networks
  Michael Sander · Pierre Ablin · Mathieu Blondel · Gabriel Peyré
- 2021 Spotlight: Momentum Residual Neural Networks
  Michael Sander · Pierre Ablin · Mathieu Blondel · Gabriel Peyré
- 2021 Spotlight: Low-Rank Sinkhorn Factorization
  Meyer Scetbon · Marco Cuturi · Gabriel Peyré
- 2020 Poster: Extra-gradient with player sampling for faster convergence in n-player games
  Samy Jelassi · Carles Domingo-Enrich · Damien Scieur · Arthur Mensch · Joan Bruna
- 2020 Poster: Super-efficiency of automatic differentiation for functions defined as a minimum
  Pierre Ablin · Gabriel Peyré · Thomas Moreau
- 2020 Poster: Fast Differentiable Sorting and Ranking
  Mathieu Blondel · Olivier Teboul · Quentin Berthet · Josip Djolonga
- 2020 Poster: Implicit differentiation of Lasso-type models for hyperparameter optimization
  Quentin Bertrand · Quentin Klopfenstein · Mathieu Blondel · Samuel Vaiter · Alexandre Gramfort · Joseph Salmon
- 2019 Poster: Stochastic Deep Networks
  Gwendoline De Bie · Gabriel Peyré · Marco Cuturi
- 2019 Oral: Stochastic Deep Networks
  Gwendoline De Bie · Gabriel Peyré · Marco Cuturi
- 2018 Poster: Differentiable Dynamic Programming for Structured Prediction and Attention
  Arthur Mensch · Mathieu Blondel
- 2018 Oral: Differentiable Dynamic Programming for Structured Prediction and Attention
  Arthur Mensch · Mathieu Blondel
- 2018 Poster: SparseMAP: Differentiable Sparse Structured Inference
  Vlad Niculae · Andre Filipe Torres Martins · Mathieu Blondel · Claire Cardie
- 2018 Oral: SparseMAP: Differentiable Sparse Structured Inference
  Vlad Niculae · Andre Filipe Torres Martins · Mathieu Blondel · Claire Cardie
- 2017 Poster: Soft-DTW: a Differentiable Loss Function for Time-Series
  Marco Cuturi · Mathieu Blondel
- 2017 Talk: Soft-DTW: a Differentiable Loss Function for Time-Series
  Marco Cuturi · Mathieu Blondel