In modern classification tasks, the number of labels keeps growing, as does the size of the datasets encountered in practice. As the number of classes increases, class ambiguity and class imbalance make it increasingly difficult to achieve high top-1 accuracy. Meanwhile, top-K metrics (metrics allowing K guesses) have become popular, especially for performance reporting. Yet, designing top-K losses tailored to deep learning remains a challenge, both theoretically and practically. In this paper we introduce a stochastic top-K hinge loss inspired by recent developments on top-K calibrated losses. Our proposal is based on a smoothing of the top-K operator built on the flexible "perturbed optimizer" framework. We show that our loss function performs very well on balanced datasets, while requiring significantly less computation than state-of-the-art top-K loss functions (for both forward and backward passes). In addition, we propose a simple variant of our loss for the imbalanced case. Experiments on a heavy-tailed dataset show that our loss function significantly outperforms the other baseline loss functions.
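To make the mechanism concrete, here is a minimal PyTorch sketch of a perturbed-optimizer smoothing of the top-K operator: Gaussian noise is added to the class scores, the hard top-K indicator is computed on each perturbed copy, and the Monte-Carlo average gives a smooth surrogate, with the standard Gaussian gradient estimator in the backward pass. The class name, parameters (n_samples, sigma), and overall structure are illustrative assumptions, not the paper's released implementation.

import torch

class PerturbedTopK(torch.autograd.Function):
    # Monte-Carlo smoothing of the hard top-K indicator with Gaussian
    # perturbations, in the spirit of the "perturbed optimizer" framework.
    # Names and hyperparameters are illustrative, not the authors' code.

    @staticmethod
    def forward(ctx, scores, k, n_samples, sigma):
        # scores: (batch, n_classes). Perturb the scores with Gaussian noise
        # and solve the (trivial) top-K selection on each perturbed copy.
        noise = torch.randn(n_samples, *scores.shape, device=scores.device)
        perturbed = scores.unsqueeze(0) + sigma * noise           # (S, B, C)
        topk = perturbed.topk(k, dim=-1).indices                  # (S, B, K)
        indicators = torch.zeros_like(perturbed).scatter_(-1, topk, 1.0)
        ctx.save_for_backward(noise, indicators)
        ctx.sigma = sigma
        # Averaging the hard indicators over the noise samples yields a
        # smooth approximation of the top-K operator.
        return indicators.mean(dim=0)                             # (B, C)

    @staticmethod
    def backward(ctx, grad_output):
        # Gaussian gradient estimator: Jacobian ~ E[y(scores + sigma*Z) Z^T] / sigma,
        # contracted with the incoming gradient (vector-Jacobian product).
        noise, indicators = ctx.saved_tensors
        inner = (indicators * grad_output.unsqueeze(0)).sum(-1, keepdim=True)  # (S, B, 1)
        grad_scores = (noise * inner).mean(dim=0) / ctx.sigma                  # (B, C)
        return grad_scores, None, None, None

# Example: smoothed top-5 "membership" scores for a batch of logits.
scores = torch.randn(8, 100, requires_grad=True)
smooth_top5 = PerturbedTopK.apply(scores, 5, 100, 0.5)
smooth_top5.sum().backward()   # gradients flow back to the scores

A top-K hinge-type loss could then be built on top of these smoothed quantities, for instance by penalizing the gap between the true-class score and the smoothed top-K threshold; the exact construction and its calibration properties are the paper's contribution and are not reproduced here.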
Author Information
Camille Garcin (Université de Montpellier)
Maximilien Servajean (LIRMM - UPVM)
Alexis Joly (INRIA, FR)
Joseph Salmon (Université de Montpellier)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification »
  Wed. Jul 20th through Thu. Jul 21st, Room Hall E #331
More from the Same Authors
- 2022 Poster: Differentially Private Coordinate Descent for Composite Empirical Risk Minimization »
  Paul Mangold · Aurélien Bellet · Joseph Salmon · Marc Tommasi
- 2022 Spotlight: Differentially Private Coordinate Descent for Composite Empirical Risk Minimization »
  Paul Mangold · Aurélien Bellet · Joseph Salmon · Marc Tommasi
- 2020 Poster: Implicit differentiation of Lasso-type models for hyperparameter optimization »
  Quentin Bertrand · Quentin Klopfenstein · Mathieu Blondel · Samuel Vaiter · Alexandre Gramfort · Joseph Salmon
- 2019 Poster: Optimal Mini-Batch and Step Sizes for SAGA »
  Nidham Gazagnadou · Robert Gower · Joseph Salmon
- 2019 Poster: Screening rules for Lasso with non-convex Sparse Regularizers »
  Alain Rakotomamonjy · Gilles Gasso · Joseph Salmon
- 2019 Oral: Optimal Mini-Batch and Step Sizes for SAGA »
  Nidham Gazagnadou · Robert Gower · Joseph Salmon
- 2019 Oral: Screening rules for Lasso with non-convex Sparse Regularizers »
  Alain Rakotomamonjy · Gilles Gasso · Joseph Salmon
- 2019 Poster: Safe Grid Search with Optimal Complexity »
  Eugene Ndiaye · Tam Le · Olivier Fercoq · Joseph Salmon · Ichiro Takeuchi
- 2019 Oral: Safe Grid Search with Optimal Complexity »
  Eugene Ndiaye · Tam Le · Olivier Fercoq · Joseph Salmon · Ichiro Takeuchi