

Morning Poster
in
Workshop: Artificial Intelligence & Human Computer Interaction

A More Robust Baseline for Active Learning by Injecting Randomness to Uncertainty Sampling

Po-Yi Lu · Chun-Liang Li · Hsuan-Tien (Tien) Lin


Abstract: Active learning is important for human-computer interaction in the domain of machine learning. It strategically selects important unlabeled examples for human annotation, reducing the labeling workload. One strong baseline strategy for active learning is uncertainty sampling, which determines importance by model uncertainty. Nevertheless, uncertainty sampling sometimes fails to outperform random sampling, thus not achieving the fundamental goal of active learning. To address this, this work investigates a simple yet overlooked remedy: injecting some randomness into uncertainty sampling. The remedy rescues uncertainty sampling from failure cases while maintaining its effectiveness in success cases. Our analysis reveals how the remedy balances the bias in the original uncertainty sampling with a small variance. Furthermore, we empirically demonstrate that injecting a mere 10% of randomness achieves competitive performance across many benchmark datasets. The findings suggest randomness-injected uncertainty sampling can serve as a more robust baseline and a preferred choice for active learning practitioners.
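The abstract does not spell out the selection rule, but a natural reading of "injecting 10% randomness" is an epsilon-greedy mix: each pick in a batch is drawn uniformly at random with probability epsilon (e.g. 0.1), and by least-confidence uncertainty sampling otherwise. The sketch below illustrates that idea; the function name, the least-confidence score, and the per-pick mixing scheme are assumptions for illustration, not necessarily the authors' exact method.

```python
import numpy as np

def select_batch(probs, unlabeled, batch_size, epsilon=0.1, rng=None):
    """Pick `batch_size` indices from `unlabeled`.

    Each pick is uniformly random with probability `epsilon` (the injected
    randomness); otherwise it is the most uncertain remaining example under
    the least-confidence score 1 - max class probability. This is one
    plausible reading of the paper's remedy, not its verified implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    pool = set(unlabeled)
    # Least-confidence uncertainty: higher means the model is less sure.
    uncertainty = 1.0 - probs.max(axis=1)
    # Pool indices ranked from most to least uncertain.
    ranked = sorted(pool, key=lambda i: uncertainty[i], reverse=True)
    chosen = []
    for _ in range(batch_size):
        if rng.random() < epsilon:
            # Random branch: uniform pick from the remaining pool.
            pick = int(rng.choice(sorted(pool)))
        else:
            # Uncertainty branch: most uncertain example still in the pool.
            pick = next(i for i in ranked if i in pool)
        chosen.append(pick)
        pool.discard(pick)
    return chosen
```

With epsilon=0 this reduces to plain uncertainty sampling (top-k least-confidence), and with epsilon=1 to pure random sampling, so the single parameter interpolates between the two baselines the abstract compares.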
