Annotating datasets is one of the main costs of modern supervised learning. The goal of weak supervision is to enable models to learn using only forms of labelling that are cheaper to collect, such as partial labelling. This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the true one. The problem of supervised learning with partial labelling has been studied for specific instances such as classification, multi-label, ranking or segmentation, but a general framework is still missing. This paper provides a unified framework, based on structured prediction and on the concept of infimum loss, to deal with partial labelling over a wide family of learning problems and loss functions. The framework leads naturally to explicit algorithms that can be easily implemented and for which we prove statistical consistency and learning rates. Experiments confirm the superiority of the proposed approach over commonly used baselines.
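To make the central concept concrete, here is a minimal sketch of the infimum loss as suggested by the abstract (the notation $\ell$, $S$, $z$ and $f$ is ours, not taken from the paper): given a loss $\ell(y, z)$ between a true label $y$ and a prediction $z$, and a partial label given as a set $S$ that contains the true label, the infimum loss scores the prediction against the most favourable candidate in the set,
$$ L(S, z) \;=\; \inf_{y \in S} \ell(y, z), $$
and learning then amounts to minimizing the expected infimum risk $\mathbb{E}\left[L(S, f(X))\right]$ over predictors $f$.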
Author Information
Vivien Cabannes (INRIA)
Alessandro Rudi (École Normale Supérieure)
Francis Bach (INRIA - École Normale Supérieure)
More from the Same Authors
- 2023 : Hessian Inertia in Neural Networks »
  Xuchan Bao · Alberto Bietti · Aaron Defazio · Vivien Cabannes
- 2023 : Differentiable Clustering and Partial Fenchel-Young Losses »
  Lawrence Stewart · Francis Bach · Felipe Llinares-Lopez · Quentin Berthet
- 2023 Poster: The SSL Interplay: Augmentations, Inductive Bias, and Generalization »
  Vivien Cabannes · Bobak T Kiani · Randall Balestriero · Yann LeCun · Alberto Bietti
- 2023 Poster: On Bridging the Gap between Mean Field and Finite Width Deep Random Multilayer Perceptron with Batch Normalization »
  Amir Joudaki · Hadi Daneshmand · Francis Bach
- 2023 Poster: Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy »
  Blake Woodworth · Konstantin Mishchenko · Francis Bach
- 2022 Poster: Convergence of Uncertainty Sampling for Active Learning »
  Anant Raj · Francis Bach
- 2022 Spotlight: Convergence of Uncertainty Sampling for Active Learning »
  Anant Raj · Francis Bach
- 2022 Poster: Anticorrelated Noise Injection for Improved Generalization »
  Antonio Orvieto · Hans Kersting · Frank Proske · Francis Bach · Aurelien Lucchi
- 2022 Spotlight: Anticorrelated Noise Injection for Improved Generalization »
  Antonio Orvieto · Hans Kersting · Frank Proske · Francis Bach · Aurelien Lucchi
- 2021 Poster: Disambiguation of Weak Supervision leading to Exponential Convergence rates »
  Vivien Cabannes · Francis Bach · Alessandro Rudi
- 2021 Spotlight: Disambiguation of Weak Supervision leading to Exponential Convergence rates »
  Vivien Cabannes · Francis Bach · Alessandro Rudi
- 2020 : Q&A with Francis Bach »
  Francis Bach
- 2020 : Talk by Francis Bach - Second Order Strikes Back - Globally convergent Newton methods for ill-conditioned generalized self-concordant Losses »
  Francis Bach
- 2020 Poster: Stochastic Optimization for Regularized Wasserstein Estimators »
  Marin Ballu · Quentin Berthet · Francis Bach
- 2020 Poster: Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization »
  Hadrien Hendrikx · Lin Xiao · Sebastien Bubeck · Francis Bach · Laurent Massoulié
- 2020 Poster: Consistent Structured Prediction with Max-Min Margin Markov Networks »
  Alex Nowak · Francis Bach · Alessandro Rudi
- 2019 Invited Talk: Online Dictionary Learning for Sparse Coding »
  Julien Mairal · Francis Bach · Jean Ponce · Guillermo Sapiro
- 2017 Poster: Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks »
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Yin Tat Lee · Laurent Massoulié
- 2017 Talk: Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks »
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Yin Tat Lee · Laurent Massoulié