An active learner is given a model class $\Theta$, a large sample of unlabeled data drawn from an underlying distribution, and access to a labeling oracle that can provide a label for any of the unlabeled instances. The goal of the learner is to find a model $\theta \in \Theta$ that fits the data to a given accuracy while making as few label queries to the oracle as possible. In this work, we present a theoretical analysis of the label requirement of active learning for regression under a heteroscedastic noise model.
Previous work has studied active regression either with no model mismatch~\cite{chaudhuri2015convergence} or with arbitrary model mismatch~\cite{sabato2014active}. In the first case, active learning offers no improvement over passive learning even in the simple setting where the unlabeled examples are drawn from a Gaussian. In the second case, under arbitrary model mismatch, the known algorithms require either a very high running time or a large number of labels. We provide bounds on the convergence rates of active and passive learning for heteroscedastic regression, where the label noise depends on the instance. Our results show that, just as in binary classification, partial knowledge of the nature of the noise can yield significant gains in the label requirement of active learning.
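To build intuition for why knowledge of instance-dependent noise helps, here is a minimal numpy sketch (not the paper's algorithm): under heteroscedastic Gaussian noise $y = \langle w, x \rangle + \epsilon$ with $\epsilon \sim \mathcal{N}(0, \sigma(x)^2)$, the maximum-likelihood estimator is inverse-variance weighted least squares. A learner that knows $\sigma(\cdot)$, even approximately, can down-weight noisy instances and obtain a more accurate fit from the same labels. The noise function `sigma` below is a made-up example for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(n=500):
    """One run: fit a scalar linear model y = w*x + noise two ways."""
    x = rng.uniform(0.0, 1.0, size=n)
    sigma = 0.02 + 3.0 * x**2        # hypothetical instance-dependent noise level
    w_true = 3.0
    y = w_true * x + sigma * rng.standard_normal(n)

    # Ordinary least squares: ignores the noise structure entirely.
    w_ols = np.sum(x * y) / np.sum(x * x)

    # Weighted least squares with weights 1/sigma(x)^2 -- the MLE
    # under heteroscedastic Gaussian noise with known sigma(x).
    wts = 1.0 / sigma**2
    w_wls = np.sum(wts * x * y) / np.sum(wts * x * x)

    return abs(w_ols - w_true), abs(w_wls - w_true)

errs = np.array([trial() for _ in range(200)])
ols_err, wls_err = errs.mean(axis=0)
print(f"mean |error|  OLS: {ols_err:.3f}   weighted LS: {wls_err:.3f}")
```

In this sketch the weighted estimator concentrates on the low-noise instances and reliably outperforms OLS; an active learner with the same noise knowledge can go further by spending its label budget on those instances in the first place.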
Author Information
Kamalika Chaudhuri (University of California at San Diego)
Prateek Jain (Microsoft Research)
Nagarajan Natarajan (Microsoft Research)
Related Events (a corresponding poster, oral, or spotlight)

2017 Poster: Active Heteroscedastic Regression »
Mon Aug 7th 06:30–10:00 PM, Room Gallery
More from the Same Authors

2019 Poster: SGD without Replacement: Sharper Rates for General Smooth Convex Functions »
Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli 
2019 Oral: SGD without Replacement: Sharper Rates for General Smooth Convex Functions »
Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli 
2019 Talk: Opening Remarks »
Kamalika Chaudhuri · Ruslan Salakhutdinov 
2018 Poster: Active Learning with Logged Data »
Songbai Yan · Kamalika Chaudhuri · Tara Javidi 
2018 Poster: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples »
Yizhen Wang · Somesh Jha · Kamalika Chaudhuri 
2018 Oral: Active Learning with Logged Data »
Songbai Yan · Kamalika Chaudhuri · Tara Javidi 
2018 Oral: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples »
Yizhen Wang · Somesh Jha · Kamalika Chaudhuri 
2018 Poster: Differentially Private Matrix Completion Revisited »
Prateek Jain · Om Dipakbhai Thakkar · Abhradeep Thakurta 
2018 Oral: Differentially Private Matrix Completion Revisited »
Prateek Jain · Om Dipakbhai Thakkar · Abhradeep Thakurta 
2017 Workshop: Picky Learners: Choosing Alternative Ways to Process Data. »
Corinna Cortes · Kamalika Chaudhuri · Giulia DeSalvo · Ningshan Zhang · Chicheng Zhang 
2017 Workshop: ML on a budget: IoT, Mobile and other tinyML applications »
Manik Varma · Venkatesh Saligrama · Prateek Jain 
2017 Poster: ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices »
Chirag Gupta · Arun Suggala · Ankit Goyal · Saurabh Goyal · Ashish Kumar · Bhargavi Paranjape · Harsha Vardhan Simhadri · Raghavendra Udupa · Manik Varma · Prateek Jain
2017 Poster: Consistency Analysis for Binary Classification Revisited »
Krzysztof Dembczynski · Wojciech Kotlowski · Sanmi Koyejo · Nagarajan Natarajan 
2017 Talk: ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices »
Chirag Gupta · Arun Suggala · Ankit Goyal · Saurabh Goyal · Ashish Kumar · Bhargavi Paranjape · Harsha Vardhan Simhadri · Raghavendra Udupa · Manik Varma · Prateek Jain
2017 Talk: Consistency Analysis for Binary Classification Revisited »
Krzysztof Dembczynski · Wojciech Kotlowski · Sanmi Koyejo · Nagarajan Natarajan 
2017 Poster: Recovery Guarantees for One-hidden-layer Neural Networks »
Kai Zhong · Zhao Song · Prateek Jain · Peter Bartlett · Inderjit Dhillon 
2017 Poster: Nearly Optimal Robust Matrix Completion »
Yeshwanth Cherapanamjeri · Prateek Jain · Kartik Gupta 
2017 Talk: Nearly Optimal Robust Matrix Completion »
Yeshwanth Cherapanamjeri · Prateek Jain · Kartik Gupta 
2017 Talk: Recovery Guarantees for One-hidden-layer Neural Networks »
Kai Zhong · Zhao Song · Prateek Jain · Peter Bartlett · Inderjit Dhillon