Oral
Teaching a black-box learner
Sanjoy Dasgupta · Daniel Hsu · Stefanos Poulis · Jerry Zhu

Tue Jun 11 11:30 AM -- 11:35 AM (PDT) @ Room 103

One widely-studied model of teaching calls for a teacher to provide the minimal set of labeled examples that uniquely specifies a target concept. The assumption is that the teacher knows the learner's hypothesis class, which is often not true of real-life teaching scenarios. We consider the problem of teaching a learner whose representation and hypothesis class are unknown: that is, the learner is a black box. We show that a teacher who does not interact with the learner can do no better than providing random examples. We then prove, however, that with interaction, a teacher can efficiently find a set of teaching examples that is a provably good approximation to the optimal set. As an illustration, we show how this scheme can be used to shrink training sets for any family of classifiers: that is, to find an approximately-minimal subset of training instances that yields the same classifier as the entire set.
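The following is a minimal sketch of the training-set-shrinking idea described above, not the paper's exact algorithm: it treats the learner as a black box and greedily grows a teaching subset by retraining and adding one point on which the subset-trained model still disagrees with the full-set model. The function name shrink_training_set, the greedy disagreement rule, and the use of scikit-learn's LogisticRegression as a stand-in black box are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def shrink_training_set(X, y, make_learner):
    """Illustrative greedy sketch (not the paper's algorithm): grow a teaching
    subset one disagreement at a time until training on the subset reproduces
    the full-set classifier's predictions on every training point."""
    full = make_learner().fit(X, y)
    target = full.predict(X)                       # behaviour we want to reproduce
    classes = np.unique(y)
    teach = [int(np.flatnonzero(y == c)[0]) for c in classes]  # one seed per class
    learner = None
    for _ in range(len(y)):
        learner = make_learner().fit(X[teach], y[teach])
        wrong = [i for i in np.flatnonzero(learner.predict(X) != target)
                 if i not in teach]
        if not wrong:                              # subset reproduces the classifier
            break
        teach.append(int(wrong[0]))                # add one disagreement and retrain
    return teach, learner

# Usage: shrink a training set for a logistic-regression "black box".
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
idx, clf = shrink_training_set(X, y, LogisticRegression)
print(f"kept {len(idx)} of {len(y)} training examples")
```

The only access to the learner here is through fit and predict, which mirrors the black-box setting; the interactive teacher observes disagreements and supplies corrective examples rather than inspecting the hypothesis class.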

Author Information

Sanjoy Dasgupta (UC San Diego)
Daniel Hsu (Columbia University)
Stefanos Poulis (UC San Diego/NTENT)
Jerry Zhu (University of Wisconsin-Madison)
