
Mitigating Sampling Bias and Improving Robustness in Active Learning
Ranganath Krishnan · Alok Sinha · Nilesh Ahuja · Mahesh Subedar · Omesh Tickoo · Ravi Iyer

This paper presents simple and efficient methods to mitigate sampling bias in active learning while achieving state-of-the-art accuracy and model robustness. We introduce supervised contrastive active learning, which leverages the contrastive loss for active learning in a supervised setting. We propose an unbiased query strategy that selects informative data samples with diverse feature representations using two methods: supervised contrastive active learning (SCAL) and deep feature modeling (DFM). We empirically demonstrate that our proposed methods reduce sampling bias and achieve state-of-the-art accuracy and model calibration in an active learning setup, with query computation 26x faster than Bayesian active learning by disagreement and 11x faster than CoreSet. The proposed SCAL method outperforms existing approaches by a large margin in robustness to dataset shift and out-of-distribution data.
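For a concrete picture of the two ingredients the abstract names, below is a minimal PyTorch sketch of a supervised contrastive (SupCon-style) loss and a diversity-based query step over feature embeddings. The function names, the temperature value, and the nearest-labeled-distance selection rule are illustrative assumptions, not the paper's exact SCAL/DFM implementation (in particular, DFM's modeling of the feature distribution is not shown).

```python
# Hedged sketch: supervised contrastive loss + a diversity-based query step.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive (SupCon-style) loss over L2-normalized features.

    features: (N, D) embeddings; labels: (N,) integer class labels.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature  # pairwise similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)  # avoid -inf * 0 = nan
    # Positives: other samples sharing the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()

def diversity_query(labeled_feats, unlabeled_feats, budget):
    """Pick `budget` unlabeled samples farthest (in feature space) from the
    labeled pool -- an assumed stand-in for diverse-sample selection."""
    labeled_feats = F.normalize(labeled_feats, dim=1)
    unlabeled_feats = F.normalize(unlabeled_feats, dim=1)
    # Distance from each unlabeled sample to its nearest labeled sample.
    dists = torch.cdist(unlabeled_feats, labeled_feats).min(dim=1).values
    return torch.topk(dists, k=budget).indices

# Example: select 10 diverse points from random 128-d embeddings.
torch.manual_seed(0)
labeled, unlabeled = torch.randn(50, 128), torch.randn(500, 128)
picked = diversity_query(labeled, unlabeled, budget=10)
```

Selecting unlabeled points far from the current labeled pool favors under-covered regions of feature space, which is one simple way to counteract the sampling bias the paper targets.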

Author Information

Ranganath Krishnan (Intel Labs)
Alok Sinha (Intel Technology)
Nilesh Ahuja (Intel)
Mahesh Subedar (Intel Corporation)
Omesh Tickoo (Intel)
Ravi Iyer (Intel)
