Label efficiency has become an increasingly important objective in deep learning applications. Active learning aims to reduce the number of labeled examples needed to train deep networks, but the empirical performance of active learning algorithms can vary dramatically across datasets and applications. It is difficult to know in advance which active learning strategy will perform best in a given application. To address this, we propose the first adaptive algorithm selection strategy for deep active learning. For any unlabeled dataset, our (meta) algorithm TAILOR (Thompson ActIve Learning algORithm selection) iteratively and adaptively chooses among a set of candidate active learning algorithms. TAILOR uses novel reward functions aimed at gathering class-balanced examples. Extensive experiments in multi-class and multi-label applications demonstrate TAILOR's effectiveness in achieving accuracy comparable to or better than that of the best of the candidate algorithms.
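Since the abstract frames TAILOR as a Thompson-sampling-style selector over candidate active learning algorithms driven by class-balance rewards, a minimal sketch of that selection loop may help convey the idea. The Beta posteriors, the particular reward function, and the strategy interface below are illustrative assumptions for this sketch, not the paper's actual design.

```python
# Hedged sketch of a Thompson-sampling meta-selector over candidate active
# learning strategies, in the spirit of TAILOR as described in the abstract.
# The reward, posteriors, and strategy interface are assumptions, not the
# authors' implementation.
import numpy as np


def class_balance_reward(labels, num_classes):
    """Assumed reward: fraction of newly labeled examples from rare classes.

    A label counts as 'rare' if its class is below the uniform share of the
    batch; the paper's actual reward functions may differ.
    """
    labels = np.asarray(labels)
    if len(labels) == 0:
        return 0.0
    counts = np.bincount(labels, minlength=num_classes)
    rare = counts < counts.sum() / num_classes
    return float(np.mean([rare[y] for y in labels]))


def tailor_style_selection(strategies, num_rounds, batch_size, num_classes,
                           rng=np.random.default_rng(0)):
    """Each round: sample a score from each strategy's Beta posterior, run the
    highest-scoring strategy for one labeling batch, observe a reward in
    [0, 1], and update that strategy's posterior."""
    alpha = np.ones(len(strategies))   # Beta prior successes
    beta = np.ones(len(strategies))    # Beta prior failures
    for _ in range(num_rounds):
        samples = rng.beta(alpha, beta)        # Thompson sampling step
        k = int(np.argmax(samples))            # chosen candidate strategy
        labels = strategies[k](batch_size)     # labels of the queried batch
        r = class_balance_reward(labels, num_classes)
        alpha[k] += r                          # fractional Bernoulli update
        beta[k] += 1.0 - r
    return alpha, beta


# Toy usage: two mock "strategies" that return class labels of queried batches.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    balanced_strategy = lambda b: rng.integers(0, 10, size=b)   # class-balanced
    skewed_strategy = lambda b: rng.choice([0, 1], size=b)      # imbalanced
    a, b = tailor_style_selection([balanced_strategy, skewed_strategy],
                                  num_rounds=50, batch_size=16, num_classes=10)
    print("posterior means:", a / (a + b))  # balanced strategy should score higher
```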
Author Information
Jifan Zhang (University of Wisconsin)
Shuai Shao (Meta)
Saurabh Verma (Meta)
Robert Nowak (University of Wisconsin-Madison)

Robert Nowak holds the Nosbusch Professorship in Engineering at the University of Wisconsin-Madison, where his research focuses on signal processing, machine learning, optimization, and statistics.
More from the Same Authors
- 2021 : On the Sparsity of Deep Neural Networks in the Overparameterized Regime: An Empirical Study »
  Rahul Parhi · Jack Wolf · Robert Nowak
- 2023 : LabelBench: A Comprehensive Framework for Benchmarking Label-Efficient Learning »
  Jifan Zhang · Yifang Chen · Gregory Canal · Stephen Mussmann · Yinglun Zhu · Simon Du · Kevin Jamieson · Robert Nowak
- 2023 : Looped Transformers are Better at Learning Learning Algorithms »
  Liu Yang · Kangwook Lee · Robert Nowak · Dimitris Papailiopoulos
- 2023 : SPEED: Experimental Design for Policy Evaluation in Linear Heteroscedastic Bandits »
  Subhojyoti Mukherjee · Qiaomin Xie · Josiah Hanna · Robert Nowak
- 2023 : A Representer Theorem for Vector-Valued Neural Networks: Insights on Weight Decay Training and Widths of Deep Neural Networks »
  Joseph Shenouda · Rahul Parhi · Kangwook Lee · Robert Nowak
- 2023 Oral: A Fully First-Order Method for Stochastic Bilevel Optimization »
  Jeongyeol Kwon · Dohyun Kwon · Stephen Wright · Robert Nowak
- 2023 Poster: A Fully First-Order Method for Stochastic Bilevel Optimization »
  Jeongyeol Kwon · Dohyun Kwon · Stephen Wright · Robert Nowak
- 2023 Poster: Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection »
  Haoyue Bai · Gregory Canal · Xuefeng Du · Jeongyeol Kwon · Robert Nowak · Sharon Li
- 2022 Poster: GALAXY: Graph-based Active Learning at the Extreme »
  Jifan Zhang · Julian Katz-Samuels · Robert Nowak
- 2022 Spotlight: GALAXY: Graph-based Active Learning at the Extreme »
  Jifan Zhang · Julian Katz-Samuels · Robert Nowak
- 2022 Poster: Training OOD Detectors in their Natural Habitats »
  Julian Katz-Samuels · Julia Nakhleh · Robert Nowak · Sharon Li
- 2022 Spotlight: Training OOD Detectors in their Natural Habitats »
  Julian Katz-Samuels · Julia Nakhleh · Robert Nowak · Sharon Li
- 2021 Poster: Improved Algorithms for Agnostic Pool-based Active Classification »
  Julian Katz-Samuels · Jifan Zhang · Lalit Jain · Kevin Jamieson
- 2021 Spotlight: Improved Algorithms for Agnostic Pool-based Active Classification »
  Julian Katz-Samuels · Jifan Zhang · Lalit Jain · Kevin Jamieson
- 2020 Poster: Robust Outlier Arm Identification »
  Yinglun Zhu · Sumeet Katariya · Robert Nowak
- 2019 Poster: Bilinear Bandits with Low-rank Structure »
  Kwang-Sung Jun · Rebecca Willett · Stephen Wright · Robert Nowak
- 2019 Oral: Bilinear Bandits with Low-rank Structure »
  Kwang-Sung Jun · Rebecca Willett · Stephen Wright · Robert Nowak
- 2019 Tutorial: Active Learning: From Theory to Practice »
  Robert Nowak · Steve Hanneke
- 2017 Poster: Algebraic Variety Models for High-Rank Matrix Completion »
  Greg Ongie · Laura Balzano · Rebecca Willett · Robert Nowak
- 2017 Talk: Algebraic Variety Models for High-Rank Matrix Completion »
  Greg Ongie · Laura Balzano · Rebecca Willett · Robert Nowak