Deep Metric Learning (DML) is arguably one of the most influential lines of research for learning visual similarities, with many new approaches proposed every year. Although the field benefits from this rapid progress, the divergence in training protocols, architectures, and parameter choices makes an unbiased comparison difficult. To provide a consistent reference point, we revisit the most widely used DML objective functions and conduct a study of the crucial parameter choices as well as the commonly neglected mini-batch sampling process. Under consistent comparison, DML objectives show much higher saturation than indicated in the literature. Further, based on our analysis, we uncover a correlation between the density and compression of the embedding space and the generalization performance of DML models. Exploiting these insights, we propose a simple yet effective training regularization that reliably boosts the performance of ranking-based DML models on various standard benchmark datasets. Code and a publicly accessible WandB repo are available at https://github.com/Confusezius/RevisitingDeepMetricLearningPyTorch.
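The ranking-based DML objectives revisited in the paper build on triplet comparisons between an anchor, a positive (same class), and a negative (different class) sample. As a minimal illustration only (not the paper's implementation; the margin value and plain-Python style are assumptions for clarity), a triplet margin loss over L2-normalized embeddings can be sketched as:

```python
import math

def l2_normalize(v):
    # Project an embedding onto the unit hypersphere, a common
    # convention in deep metric learning pipelines.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    # Hinge on the distance gap: the anchor-positive distance should be
    # smaller than the anchor-negative distance by at least `margin`
    # (margin=0.2 is an illustrative choice, not the paper's setting).
    a, p, n = map(l2_normalize, (anchor, positive, negative))
    d_ap = math.dist(a, p)  # anchor-positive distance
    d_an = math.dist(a, n)  # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)
```

The loss is zero once a negative is already sufficiently far away, which is why the choice of which triplets enter a mini-batch (the sampling process studied in the paper) strongly influences training.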
Author Information
Karsten Roth (Heidelberg University, Mila)
Timo Milbich (Heidelberg University)
Samrath Sinha (University of Toronto)
Prateek Gupta (University of Oxford)
Bjorn Ommer (Heidelberg University)
Joseph Paul Cohen (Mila, University of Montreal)
More from the Same Authors
- 2021 : Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays »
  Joseph Paul Cohen · Rupert Brooks · Evan Zucker · Anuj Pareek · Lungren Matthew · Akshay Chaudhari
- 2021 : Learning by Watching: Physical Imitation of Manipulation Skills from Human Videos »
  Haoyu Xiong · Yun-Chun Chen · Homanga Bharadhwaj · Samrath Sinha · Animesh Garg
- 2022 Workshop: AI for Agent-Based Modelling (AI4ABM) »
  Christian Schroeder · Yang Zhang · Anisoara Calinescu · Dylan Radovic · Prateek Gupta · Jakob Foerster
- 2022 Poster: Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics »
  Matthias Weissenbacher · Samrath Sinha · Animesh Garg · Yoshinobu Kawahara
- 2022 Spotlight: Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics »
  Matthias Weissenbacher · Samrath Sinha · Animesh Garg · Yoshinobu Kawahara
- 2021 Poster: Simultaneous Similarity-based Self-Distillation for Deep Metric Learning »
  Karsten Roth · Timo Milbich · Bjorn Ommer · Joseph Paul Cohen · Marzyeh Ghassemi
- 2021 Spotlight: Simultaneous Similarity-based Self-Distillation for Deep Metric Learning »
  Karsten Roth · Timo Milbich · Bjorn Ommer · Joseph Paul Cohen · Marzyeh Ghassemi
- 2020 : Contributed Talk 4: A Benchmark of Medical Out of Distribution Detection »
  Joseph Paul Cohen
- 2020 Poster: Small-GAN: Speeding up GAN Training using Core-Sets »
  Samrath Sinha · Han Zhang · Anirudh Goyal · Yoshua Bengio · Hugo Larochelle · Augustus Odena
- 2019 : Poster Session & Lunch break »
  Kay Wiese · Brandon Carter · Dan DeBlasio · Mohammad Hashir · Rachel Chan · Matteo Manica · Ali Oskooei · Zhenqin Wu · Karren Yang · François FAGES · Ruishan Liu · Nicasia Beebe-Wang · Bryan He · Jacopo Cirrone · Pekka Marttinen · Elior Rahmani · Harri Lähdesmäki · Nikhil Yadala · Andreea-Ioana Deac · Ava Soleimany · Mansi Ranjit Mane · Jason Ernst · Joseph Paul Cohen · Joel Mathew · Vishal Agarwal · AN ZHENG