A major problem with Active Learning (AL) is its high training cost, since models are typically retrained from scratch after every query round. We begin by demonstrating that standard AL with warm-started neural networks fails both to accelerate training and to avoid catastrophic forgetting when fine-tuning across AL query rounds. We then develop a new class of techniques that circumvents this problem by biasing further training towards previously labeled sets, thereby complementing existing work on AL acceleration. We accomplish this by employing existing, and developing novel, replay-based Continual Learning (CL) algorithms that are effective at quickly learning the new without forgetting the old, especially when data comes from an evolving distribution. We call this paradigm "Continual Active Learning" (CAL). We show that CAL achieves significant speedups using a variety of replay schemes that use model distillation and that select diverse/uncertain points from the history. We conduct experiments across many diverse data domains, including natural language, vision, medical imaging, and computational biology, each with very different neural architectures (transformers/CNNs/MLPs) and dataset sizes. CAL consistently provides a 3x reduction in training time while retaining performance and out-of-distribution robustness, showing its wide applicability.
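To make the CAL recipe concrete, the sketch below shows one minimal, illustrative version of such a loop: query the most uncertain unlabeled points, then fine-tune the current model on the newly labeled batch plus a small replay sample of previously labeled data, rather than retraining from scratch. This is not the authors' implementation; the toy data, the small MLP, and helper names such as uncertainty_query and replay_sample are assumptions made purely for illustration.

```python
# Minimal sketch (assumed, not the authors' code) of a replay-based
# Continual Active Learning loop with an uncertainty acquisition function.
import torch
import torch.nn as nn
import torch.nn.functional as F

def uncertainty_query(model, unlabeled_x, k):
    """Pick the k unlabeled points with highest predictive entropy."""
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(k).indices

def replay_sample(history_x, history_y, m):
    """Uniformly sample m previously labeled points for replay (illustrative)."""
    idx = torch.randperm(len(history_x))[:m]
    return history_x[idx], history_y[idx]

def finetune(model, x, y, epochs=5, lr=1e-3):
    """Continue training the current model (warm start) on the given batch."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

# Toy setup: 2-class synthetic data and a small MLP.
torch.manual_seed(0)
pool_x = torch.randn(1000, 10)
pool_y = (pool_x[:, 0] > 0).long()           # oracle labels, revealed on query
labeled = torch.zeros(len(pool_x), dtype=torch.bool)
labeled[:50] = True                          # small seed set
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
finetune(model, pool_x[labeled], pool_y[labeled])   # initial training on the seed set

for _ in range(5):                           # AL query rounds
    unlabeled_idx = (~labeled).nonzero(as_tuple=True)[0]
    picked = unlabeled_idx[uncertainty_query(model, pool_x[unlabeled_idx], k=25)]
    labeled[picked] = True                   # "label" the queried points
    # CAL step: continue training on the new points plus a replay buffer of
    # old ones, instead of retraining from scratch on the full labeled set.
    old_idx = labeled.clone()
    old_idx[picked] = False
    rx, ry = replay_sample(pool_x[old_idx], pool_y[old_idx], m=25)
    finetune(model, torch.cat([pool_x[picked], rx]), torch.cat([pool_y[picked], ry]))
```

The distillation-based and diversity/uncertainty-based replay schemes mentioned in the abstract would slot into this skeleton by changing how replay_sample chooses points or by adding a distillation term to the fine-tuning loss.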
Author Information
Gantavya Bhatt (University of Washington)
Arnav M Das (University of Washington)
Rui Yang (Memorial Sloan Kettering Cancer Center)
Vianne Gao (Memorial Sloan Kettering Cancer Center)
Jeff Bilmes (University of Washington)
More from the Same Authors
- 2021 : Tighter m-DPP Coreset Sample Complexity Bounds » Gantavya Bhatt · Jeff Bilmes
- 2021 : Epiphany: Predicting the Hi-C Contact Map from 1D Epigenomic Data » Rui Yang
- 2021 : More Information, Less Data » Jeff Bilmes
- 2021 : Introduction by the Organizers » Abir De · Rishabh Iyer · Ganesh Ramakrishnan · Jeff Bilmes
- 2021 Workshop: Subset Selection in Machine Learning: From Theory to Applications » Rishabh Iyer · Abir De · Ganesh Ramakrishnan · Jeff Bilmes
- 2020 Poster: Coresets for Data-efficient Training of Machine Learning Models » Baharan Mirzasoleiman · Jeff Bilmes · Jure Leskovec
- 2020 Poster: Time-Consistent Self-Supervision for Semi-Supervised Learning » Tianyi Zhou · Shengjie Wang · Jeff Bilmes
- 2019 : Jeff Bilmes: Deep Submodular Synergies » Jeff Bilmes
- 2019 Poster: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation » Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Oral: Bias Also Matters: Bias Attribution for Deep Neural Network Explanation » Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Poster: Jumpout: Improved Dropout for Deep Neural Networks with ReLUs » Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Poster: Combating Label Noise in Deep Learning using Abstention » Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof
- 2019 Oral: Jumpout: Improved Dropout for Deep Neural Networks with ReLUs » Shengjie Wang · Tianyi Zhou · Jeff Bilmes
- 2019 Oral: Combating Label Noise in Deep Learning using Abstention » Sunil Thulasidasan · Tanmoy Bhattacharya · Jeff Bilmes · Gopinath Chennupati · Jamal Mohd-Yusof
- 2018 Poster: Constrained Interacting Submodular Groupings » Andrew Cotter · Mahdi Milani Fard · Seungil You · Maya Gupta · Jeff Bilmes
- 2018 Poster: Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions » Wenruo Bai · Jeff Bilmes
- 2018 Oral: Constrained Interacting Submodular Groupings » Andrew Cotter · Mahdi Milani Fard · Seungil You · Maya Gupta · Jeff Bilmes
- 2018 Oral: Greed is Still Good: Maximizing Monotone Submodular+Supermodular (BP) Functions » Wenruo Bai · Jeff Bilmes