Machine learning is increasingly targeting areas where input data cannot be accurately described by a single vector, but can instead be modeled using the more flexible concept of random vectors, namely probability measures, or more simply point clouds of varying cardinality. Using deep architectures on measures poses, however, many challenging issues. Indeed, deep architectures were originally designed to handle fixed-length vectors, or, using recurrent mechanisms, ordered sequences thereof. In sharp contrast, measures describe a varying number of weighted observations with no particular order. We propose in this work a deep framework designed to handle crucial aspects of measures, namely permutation invariance and variations in weights and cardinality. Architectures derived from this pipeline can (i) map measures to measures, using the concept of push-forward operators; (ii) bridge the gap between measures and Euclidean spaces, through integration steps. This makes it possible to design discriminative networks (to classify or reduce the dimensionality of input measures), generative architectures (to synthesize measures) and recurrent pipelines (to predict measure dynamics). We provide a theoretical analysis of these building blocks, study our architectures' approximation abilities and robustness to perturbations, and evaluate them on various discriminative and generative tasks.
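As a minimal illustration of the two building blocks the abstract names, the sketch below (not the authors' implementation; all function names are hypothetical) applies a push-forward step, the same map f applied to every point of a weighted point cloud, followed by an integration step that pools the measure into a fixed-length vector. Both steps are invariant to the order of the points and indifferent to the cloud's cardinality.

```python
# Hypothetical sketch of the abstract's two building blocks, acting on a
# discrete measure mu = sum_i w_i * delta_{x_i} given as (weights, points).
import numpy as np

def push_forward(x, f):
    # (i) Push-forward: apply the same map f to every support point,
    # mu -> sum_i w_i * delta_{f(x_i)}. Order- and cardinality-agnostic.
    return np.stack([f(xi) for xi in x])

def integrate(w, x):
    # (ii) Integration: E_mu[x] = sum_i w_i * x_i, a fixed-length vector
    # regardless of how many points the input measure contains.
    return w @ x

# Toy usage: clouds of different sizes map to vectors of the same dimension.
rng = np.random.default_rng(0)
for n in (5, 12):
    x = rng.normal(size=(n, 3))   # n points in R^3
    w = np.full(n, 1.0 / n)       # uniform weights summing to 1
    print(integrate(w, push_forward(x, np.tanh)))
```

Reading the abstract, composing such blocks with an integration step at the end yields a discriminative network, while chaining only push-forward steps yields a measure-to-measure (generative) map.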
Author Information
Gwendoline De Bie (Ecole normale supérieure)
Gabriel Peyré (CNRS and ENS)
Marco Cuturi (ENSAE / CREST)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Stochastic Deep Networks »
  Wed. Jun 12th 12:15 -- 12:20 AM Room Hall A
More from the Same Authors
- 2022: Recovering Stochastic Dynamics via Gaussian Schrödinger Bridges »
  Ya-Ping Hsieh · Charlotte Bunne · Marco Cuturi · Andreas Krause
- 2022: Recovering Stochastic Dynamics via Gaussian Schrödinger Bridges »
  Charlotte Bunne · Ya-Ping Hsieh · Marco Cuturi · Andreas Krause
- 2023: Imposing and learning structure in OT displacements through cost engineering »
  Marco Cuturi
- 2023 Poster: Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective »
  Michael Sander · Joan Puigcerver · Josip Djolonga · Gabriel Peyré · Mathieu Blondel
- 2023 Poster: Monge, Bregman and Occam: Interpretable Optimal Transport in High-Dimensions with Feature-Sparse Maps »
  Marco Cuturi · Michal Klein · Pierre Ablin
- 2023 Poster: The Monge Gap: A Regularizer to Learn All Transport Maps »
  Théo Uscidda · Marco Cuturi
- 2022 Poster: Unsupervised Ground Metric Learning Using Wasserstein Singular Vectors »
  Geert-Jan Huizing · Laura Cantini · Gabriel Peyré
- 2022 Spotlight: Unsupervised Ground Metric Learning Using Wasserstein Singular Vectors »
  Geert-Jan Huizing · Laura Cantini · Gabriel Peyré
- 2022 Poster: Linear-Time Gromov Wasserstein Distances using Low Rank Couplings and Costs »
  Meyer Scetbon · Gabriel Peyré · Marco Cuturi
- 2022 Spotlight: Linear-Time Gromov Wasserstein Distances using Low Rank Couplings and Costs »
  Meyer Scetbon · Gabriel Peyré · Marco Cuturi
- 2021 Poster: Low-Rank Sinkhorn Factorization »
  Meyer Scetbon · Marco Cuturi · Gabriel Peyré
- 2021 Poster: Momentum Residual Neural Networks »
  Michael Sander · Pierre Ablin · Mathieu Blondel · Gabriel Peyré
- 2021 Spotlight: Momentum Residual Neural Networks »
  Michael Sander · Pierre Ablin · Mathieu Blondel · Gabriel Peyré
- 2021 Spotlight: Low-Rank Sinkhorn Factorization »
  Meyer Scetbon · Marco Cuturi · Gabriel Peyré
- 2020 Poster: Super-efficiency of automatic differentiation for functions defined as a minimum »
  Pierre Ablin · Gabriel Peyré · Thomas Moreau
- 2019 Poster: Geometric Losses for Distributional Learning »
  Arthur Mensch · Mathieu Blondel · Gabriel Peyré
- 2019 Oral: Geometric Losses for Distributional Learning »
  Arthur Mensch · Mathieu Blondel · Gabriel Peyré
- 2017 Poster: Sliced Wasserstein Kernel for Persistence Diagrams »
  Mathieu Carrière · Marco Cuturi · Steve Oudot
- 2017 Poster: Soft-DTW: a Differentiable Loss Function for Time-Series »
  Marco Cuturi · Mathieu Blondel
- 2017 Talk: Sliced Wasserstein Kernel for Persistence Diagrams »
  Mathieu Carrière · Marco Cuturi · Steve Oudot
- 2017 Talk: Soft-DTW: a Differentiable Loss Function for Time-Series »
  Marco Cuturi · Mathieu Blondel