

Poster in Workshop: Data-centric Machine Learning Research (DMLR): Datasets for Foundation Models

Pearls from Pebbles: Improved Confidence Functions for Auto-labeling

Harit Vishwakarma · Yi Chen · Sui Jiet Tay · Satya Sai Srinath Namburi GNVV · Frederic Sala · Ramya Vinayak


Abstract: Auto-labeling is an important family of techniques for producing labeled training sets with minimal manual annotation. A prominent variant, threshold-based auto-labeling (TBAL), works by finding thresholds on a model's confidence scores above which it can accurately label unlabeled data automatically. However, many models are known to produce overconfident scores, leading to poor TBAL performance. While a natural idea is to apply off-the-shelf calibration methods to alleviate the overconfidence issue, we show that such methods fall short. Rather than experimenting with ad-hoc choices of confidence functions, we propose a framework for studying the \emph{optimal} TBAL confidence function. We develop a tractable version of the framework to obtain \texttt{Colander} (Confidence functions for Efficient and Reliable Auto-labeling), a new post-hoc method specifically designed to maximize performance in TBAL systems. We perform an extensive empirical evaluation of \texttt{Colander} and compare it against methods designed for calibration. \texttt{Colander} achieves up to a $60\%$ improvement in coverage over the baselines while maintaining an error level below $5\%$ and using the same amount of labeled data.
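The core TBAL idea described above — choosing a confidence threshold so that points scored above it can be auto-labeled at a bounded error rate — can be sketched as follows. This is a minimal generic illustration, not the paper's \texttt{Colander} method; the data, the $5\%$ error target, and the single-threshold setup are all assumptions for the example.

```python
import numpy as np

def find_threshold(conf, correct, max_error=0.05):
    """Return the smallest confidence threshold t such that the
    validation points with confidence >= t have empirical error
    <= max_error (np.inf if no such threshold exists)."""
    order = np.argsort(-conf)                # sort by descending confidence
    conf_sorted = conf[order]
    err = 1.0 - correct[order]
    # running error rate among the k most-confident validation points
    running_err = np.cumsum(err) / np.arange(1, len(err) + 1)
    ok = np.where(running_err <= max_error)[0]
    if len(ok) == 0:
        return np.inf                        # nothing can be safely auto-labeled
    return conf_sorted[ok[-1]]               # largest prefix meeting the error target

# hypothetical validation set: confidence scores and 0/1 correctness flags
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = (rng.uniform(size=1000) < conf).astype(float)  # roughly calibrated scores

t = find_threshold(conf, correct, max_error=0.05)
coverage = float((conf >= t).mean())         # fraction of points auto-labeled
```

With well-calibrated scores the threshold lands where the model is genuinely reliable and coverage is reasonable; with overconfident scores the empirical error forces the threshold up and coverage collapses, which is the failure mode motivating better confidence functions.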
