Measures of similarity (or dissimilarity) are a key ingredient in many machine learning algorithms. We introduce DID, a pairwise dissimilarity measure applicable to a wide range of data spaces, which leverages the data's internal structure to be invariant to diffeomorphisms. We prove that DID enjoys properties which make it relevant for theoretical study and practical use. By representing each datum as a function, DID is defined as the solution to an optimization problem in a Reproducing Kernel Hilbert Space and can be expressed in closed form. In practice, it can be efficiently approximated via Nyström sampling. Experiments support the merits of DID.
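The Nyström sampling mentioned in the abstract is a standard low-rank technique for approximating kernel computations. The sketch below is a generic illustration of that technique, not the paper's actual DID computation; the function names, the Gaussian kernel choice, and all parameters are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

def nystrom_approx(X, m, sigma=1.0, reg=1e-8, seed=0):
    # Nystrom approximation K ~ K_nm K_mm^{-1} K_mn, built from m landmark
    # points sampled uniformly without replacement from the rows of X.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    K_nm = gaussian_kernel(X, X[idx], sigma)
    K_mm = gaussian_kernel(X[idx], X[idx], sigma) + reg * np.eye(m)
    return K_nm @ np.linalg.solve(K_mm, K_nm.T)

# Compare the exact kernel matrix with its rank-m Nystrom approximation.
X = np.random.default_rng(1).normal(size=(200, 5))
K = gaussian_kernel(X, X)
K_hat = nystrom_approx(X, m=50)
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

The appeal of the approximation is computational: forming and inverting the m-by-m landmark block costs far less than working with the full n-by-n kernel matrix when m is much smaller than n.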
Author Information
Théophile Cantelobre (Inria)

PhD Candidate @ SIERRA Team (Inria Paris & Inria London Programme) I am interested in building machine learning methods that are provably efficient and useful in practice. I am a PhD candidate in Computer Science based in the SIERRA project-team (Inria Paris DI-ENS) and in the Inria London Programme, supervised by Alessandro Rudi and Benjamin Guedj. Before my PhD, I studied Mathematics & Engineering in a dual master's program between Mines ParisTech (Cycle Ingénieur Civil) and Sorbonne Université (M2A) in Paris, France. In the past, I worked on PAC-Bayes guarantees for structured prediction at Inria & UCL, and on state estimation for underwater robotics at Schlumberger-Doll Research.
Carlo Ciliberto (University College London)
Benjamin Guedj (Inria and University College London)

Benjamin Guedj is a tenured research scientist at Inria (France) and a senior research scientist at University College London (UK). His main research areas are statistical learning theory, PAC-Bayes, machine learning and computational statistics. He obtained a PhD in mathematics from Sorbonne Université (formerly Université Pierre et Marie Curie, France) in 2013.
Alessandro Rudi (INRIA, École Normale Supérieure)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Measuring dissimilarity with diffeomorphism invariance
  Thu. Jul 21 through Fri. Jul 22, Room Hall E #517
More from the Same Authors
- 2023 Workshop: PAC-Bayes Meets Interactive Learning
  Hamish Flynn · Maxime Heuillet · Audrey Durand · Melih Kandemir · Benjamin Guedj
- 2023 Poster: Cluster-Specific Predictions with Multi-Task Gaussian Processes
  Arthur Leroy · Pierre Latouche · Benjamin Guedj · Servane Gey
- 2022 Poster: Nyström Kernel Mean Embeddings
  Antoine Chatalic · Nicolas Schreuder · Lorenzo Rosasco · Alessandro Rudi
- 2022 Poster: Distribution Regression with Sliced Wasserstein Kernels
  Dimitri Marie Meunier · Massimiliano Pontil · Carlo Ciliberto
- 2022 Spotlight: Distribution Regression with Sliced Wasserstein Kernels
  Dimitri Marie Meunier · Massimiliano Pontil · Carlo Ciliberto
- 2022 Spotlight: Nyström Kernel Mean Embeddings
  Antoine Chatalic · Nicolas Schreuder · Lorenzo Rosasco · Alessandro Rudi
- 2022 Poster: Non-Vacuous Generalisation Bounds for Shallow Neural Networks
  Felix Biggs · Benjamin Guedj
- 2022 Spotlight: Non-Vacuous Generalisation Bounds for Shallow Neural Networks
  Felix Biggs · Benjamin Guedj
- 2021 Poster: Disambiguation of Weak Supervision leading to Exponential Convergence rates
  Vivien Cabannes · Francis Bach · Alessandro Rudi
- 2021 Spotlight: Disambiguation of Weak Supervision leading to Exponential Convergence rates
  Vivien Cabannes · Francis Bach · Alessandro Rudi
- 2020 Poster: Consistent Structured Prediction with Max-Min Margin Markov Networks
  Alex Nowak · Francis Bach · Alessandro Rudi
- 2019 Poster: Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
  Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil
- 2019 Oral: Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
  Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil
- 2019 Poster: Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction
  Giulia Luise · Dimitrios Stamos · Massimiliano Pontil · Carlo Ciliberto
- 2019 Poster: Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation
  Ruohan Wang · Carlo Ciliberto · Pierluigi Vito Amadori · Yiannis Demiris
- 2019 Oral: Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction
  Giulia Luise · Dimitrios Stamos · Massimiliano Pontil · Carlo Ciliberto
- 2019 Oral: Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation
  Ruohan Wang · Carlo Ciliberto · Pierluigi Vito Amadori · Yiannis Demiris
- 2019 Tutorial: A Primer on PAC-Bayesian Learning
  Benjamin Guedj · John Shawe-Taylor