One widely-studied model of teaching calls for a teacher to provide the minimal set of labeled examples that uniquely specifies a target concept. The assumption is that the teacher knows the learner's hypothesis class, which is often not true of real-life teaching scenarios. We consider the problem of teaching a learner whose representation and hypothesis class are unknown---that is, the learner is a black box. We show that a teacher who does not interact with the learner can do no better than providing random examples. We then prove, however, that with interaction, a teacher can efficiently find a set of teaching examples that is a provably good approximation to the optimal set. As an illustration, we show how this scheme can be used to shrink training sets for any family of classifiers: that is, to find an approximately-minimal subset of training instances that yields the same classifier as the entire set.
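The training-set shrinking application lends itself to a short illustration. The sketch below is an assumption-laden rendering of the idea in the abstract, not the authors' published algorithm: it treats the learner as a black box with only fit/predict access, and "interaction" means retraining the learner on the current subset and observing where its predictions still disagree with the full-data classifier. The function name `shrink_training_set` and the choice of a scikit-learn `LogisticRegression` learner are illustrative.

```python
# A minimal sketch of interactive training-set shrinking for a black-box
# learner. Illustrates the idea in the abstract; NOT the authors' published
# algorithm. All names here are hypothetical.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression


def shrink_training_set(learner, X, y):
    """Greedily grow a subset of (X, y) until the black box, retrained on
    the subset, reproduces the predictions of the full-data classifier."""
    target = clone(learner).fit(X, y).predict(X)  # full-data behavior

    # Seed with one example per class so the learner can always be fit.
    subset = [int(np.flatnonzero(y == c)[0]) for c in np.unique(y)]
    chosen = set(subset)

    while len(subset) < len(X):
        # Interaction step: retrain the black box on the current subset.
        preds = clone(learner).fit(X[subset], y[subset]).predict(X)
        disagree = [i for i in np.flatnonzero(preds != target)
                    if i not in chosen]
        if not disagree:
            break  # subset-trained learner matches the full classifier
        subset.append(int(disagree[0]))  # "teach" one disagreement point
        chosen.add(subset[-1])
    return subset


X, y = make_classification(n_samples=500, n_features=10, random_state=0)
S = shrink_training_set(LogisticRegression(max_iter=1000), X, y)
print(f"kept {len(S)} of {len(X)} training examples")
```

The greedy choice mirrors the role of interaction in the abstract: without querying the retrained learner, the teacher would have no signal about which examples matter, and would be reduced to sampling at random.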
Author Information
Sanjoy Dasgupta (UC San Diego)
Daniel Hsu (Columbia University)
Stefanos Poulis (UC San Diego/NTENT)
Jerry Zhu (University of Wisconsin-Madison)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Teaching a black-box learner
  Wed. Jun 12th, 01:30 -- 04:00 AM, Room: Pacific Ballroom #183
More from the Same Authors
- 2021: Corruption Robust Offline Reinforcement Learning
  Xuezhou Zhang · Yiding Chen · Jerry Zhu · Wen Sun
- 2022: Simple and near-optimal algorithms for hidden stratification and multi-group learning
  Christopher Tosh · Daniel Hsu
- 2022 Poster: Out-of-Distribution Detection with Deep Nearest Neighbors
  Yiyou Sun · Yifei Ming · Jerry Zhu · Yixuan Li
- 2022 Poster: Constants Matter: The Performance Gains of Active Learning
  Stephen Mussmann · Sanjoy Dasgupta
- 2022 Spotlight: Constants Matter: The Performance Gains of Active Learning
  Stephen Mussmann · Sanjoy Dasgupta
- 2022 Spotlight: Out-of-Distribution Detection with Deep Nearest Neighbors
  Yiyou Sun · Yifei Ming · Jerry Zhu · Yixuan Li
- 2022 Poster: Simple and near-optimal algorithms for hidden stratification and multi-group learning
  Christopher Tosh · Daniel Hsu
- 2022 Poster: Framework for Evaluating Faithfulness of Local Explanations
  Sanjoy Dasgupta · Nave Frost · Michal Moshkovitz
- 2022 Spotlight: Simple and near-optimal algorithms for hidden stratification and multi-group learning
  Christopher Tosh · Daniel Hsu
- 2022 Spotlight: Framework for Evaluating Faithfulness of Local Explanations
  Sanjoy Dasgupta · Nave Frost · Michal Moshkovitz
- 2021 Poster: Robust Policy Gradient against Strong Data Corruption
  Xuezhou Zhang · Yiding Chen · Jerry Zhu · Wen Sun
- 2021 Spotlight: Robust Policy Gradient against Strong Data Corruption
  Xuezhou Zhang · Yiding Chen · Jerry Zhu · Wen Sun
- 2020 Poster: Adaptive Reward-Poisoning Attacks against Reinforcement Learning
  Xuezhou Zhang · Yuzhe Ma · Adish Singla · Jerry Zhu
- 2020 Poster: Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning
  Amin Rakhsha · Goran Radanovic · Rati Devidze · Jerry Zhu · Adish Singla
- 2020 Poster: Explainable k-Means and k-Medians Clustering
  Michal Moshkovitz · Sanjoy Dasgupta · Cyrus Rashtchian · Nave Frost
- 2019 Poster: A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization
  Yucheng Chen · Matus Telgarsky · Chao Zhang · Bolton Bailey · Daniel Hsu · Jian Peng
- 2019 Oral: A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization
  Yucheng Chen · Matus Telgarsky · Chao Zhang · Bolton Bailey · Daniel Hsu · Jian Peng
- 2018 Tutorial: Understanding your Neighbors: Practical Perspectives From Modern Analysis
  Sanjoy Dasgupta · Samory Kpotufe
- 2017 Poster: Diameter-Based Active Learning
  Christopher Tosh · Sanjoy Dasgupta
- 2017 Talk: Diameter-Based Active Learning
  Christopher Tosh · Sanjoy Dasgupta