

Talk

Iterative Machine Teaching

Weiyang Liu · Bo Dai · Ahmad Humayun · Charlene Tay · Chen Yu · Linda Smith · James Rehg · Le Song

C4.6 & C4.7

Abstract:

In this paper, we consider the problem of machine teaching, the inverse problem of machine learning. Different from traditional machine teaching, which views the learner as a batch algorithm, we study a new paradigm where the learner uses an iterative algorithm and a teacher can feed examples sequentially and intelligently based on the current performance of the learner. We show that the teaching complexity in the iterative case is very different from that in the batch case. Instead of constructing a minimal training set for learners, our iterative machine teaching focuses on achieving fast convergence in the learner model. Depending on the level of information the teacher has about the learner model, we design teaching algorithms that can provably reduce the number of teaching examples and achieve faster convergence than learning without teachers. We also validate our theoretical findings with extensive experiments on different data distributions and real image datasets.
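To make the sequential-teaching idea concrete, here is a minimal sketch of a greedy "omniscient" teacher for a least-squares learner trained by SGD: at each step the teacher picks, from a candidate pool, the example whose gradient update moves the learner's parameters closest to the target model. The pool, learning rate, and greedy selection criterion below are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target model the teacher wants the learner to reach (assumed known to the teacher).
d = 10
w_star = rng.normal(size=d)

# Candidate teaching pool; labels come from the target model (an assumption of
# this sketch -- the setting could also allow synthesizing examples directly).
X_pool = rng.normal(size=(1000, d))
y_pool = X_pool @ w_star

def sgd_step(w, x, y, lr):
    """One SGD step of the learner on the squared loss 0.5 * (w @ x - y)**2."""
    grad = (w @ x - y) * x
    return w - lr * grad

def teach(w0, lr=0.05, steps=200):
    """Greedy teacher: at each step, feed the pool example whose SGD update
    brings the learner closest (in Euclidean distance) to w_star."""
    w = w0.copy()
    for _ in range(steps):
        candidates = [sgd_step(w, x, y, lr) for x, y in zip(X_pool, y_pool)]
        dists = [np.linalg.norm(c - w_star) for c in candidates]
        w = candidates[int(np.argmin(dists))]
    return w

def learn_randomly(w0, lr=0.05, steps=200):
    """Baseline: SGD on uniformly sampled pool examples, no teacher."""
    w = w0.copy()
    for _ in range(steps):
        i = rng.integers(len(X_pool))
        w = sgd_step(w, X_pool[i], y_pool[i], lr)
    return w

w0 = np.zeros(d)
print("taught   ||w - w*|| =", np.linalg.norm(teach(w0) - w_star))
print("untaught ||w - w*|| =", np.linalg.norm(learn_randomly(w0) - w_star))
```

Running the sketch shows the taught learner reaching a much smaller parameter error than the untaught baseline after the same number of updates, which is the kind of convergence speed-up the abstract refers to.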
