

Poster
in
Workshop: Data-centric Machine Learning Research (DMLR)

Taming Small-sample Bias in Low-budget Active Learning

Linxin Song · Jieyu Zhang · Xiaotian Lu · Tianyi Zhou


Abstract:

Active learning (AL) aims to minimize annotation cost by querying only a few informative examples at each model training stage. However, training a model on so few queried examples suffers from small-sample bias. In this paper, we address this small-sample bias in low-budget AL by exploring a regularizer called Firth bias reduction, which provably reduces the bias during model training but can hinder learning if its coefficient is not adapted to the learning progress. Instead of tuning the coefficient for each query round, which is sensitive and time-consuming, we propose curriculum Firth bias reduction (CHAIN), which automatically adjusts the coefficient to track the training process. Under both deep learning and linear model settings, experiments on three benchmark datasets with several widely used query strategies and hyperparameter search methods show that CHAIN yields more efficient AL and substantially improves the progress made by each active learning query.
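To make the mechanism concrete, below is a minimal PyTorch sketch of a Firth-regularized training loss with a scheduled coefficient. The simplified penalty (cross-entropy against a uniform class distribution, a common approximation of the Firth term for softmax classifiers) and the linear annealing schedule are assumptions for illustration only; the paper's CHAIN coefficient is adjusted adaptively rather than by a fixed schedule, and `chain_coefficient` here is a hypothetical stand-in.

```python
import torch
import torch.nn.functional as F

def firth_penalty(logits: torch.Tensor) -> torch.Tensor:
    """Simplified Firth bias-reduction penalty for softmax classifiers:
    cross-entropy between a uniform distribution over classes and the
    model's predicted distribution, averaged over the batch."""
    log_probs = F.log_softmax(logits, dim=-1)
    # Mean over both batch and class dimensions = uniform-target CE.
    return -log_probs.mean()

def chain_coefficient(step: int, total_steps: int, lam_max: float = 1.0) -> float:
    """Hypothetical curriculum: start with strong bias reduction and
    anneal it linearly as training progresses. CHAIN's actual schedule
    adapts to the training process; this is only an illustration."""
    return lam_max * (1.0 - step / total_steps)

def training_step(model, x, y, step, total_steps):
    """One training step: standard cross-entropy plus a scheduled
    Firth bias-reduction term."""
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    lam = chain_coefficient(step, total_steps)
    return ce + lam * firth_penalty(logits)
```

The design intuition is that bias reduction matters most early in training, when the model is fit on only a handful of queried examples, and should fade as more labels accumulate; a fixed coefficient would either over-regularize late rounds or under-correct early ones.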
