Iterative Machine Teaching
Weiyang Liu · Bo Dai · Ahmad Humayun · Charlene Tay · Chen Yu · Linda Smith · James Rehg · Le Song

Sun Aug 06 11:24 PM -- 11:42 PM (PDT) @ C4.6 & C4.7

In this paper, we consider the problem of machine teaching, the inverse problem of machine learning. Unlike traditional machine teaching, which views the learner as a batch algorithm, we study a new paradigm in which the learner runs an iterative algorithm and the teacher can feed examples sequentially and intelligently based on the learner's current performance. We show that teaching complexity in the iterative case is very different from that in the batch case. Instead of constructing a minimal training set for the learner, our iterative machine teaching focuses on achieving fast convergence of the learner model. Depending on how much information the teacher has about the learner model, we design teaching algorithms that provably reduce the number of teaching examples and achieve faster convergence than learning without a teacher. We also validate our theoretical findings with extensive experiments on different data distributions and real image datasets.
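To make the paradigm concrete, here is a minimal sketch (not the authors' code) of an "omniscient" teacher in the spirit of the abstract: the teacher knows the target model w* and, at each step, scans a candidate pool and feeds the example whose single SGD step moves the learner's weights closest to w*. The pool, learning rate, and squared-loss learner are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, pool_size, eta, steps = 5, 200, 0.1, 30

w_star = rng.normal(size=d)              # target model, known to the teacher
X = rng.normal(size=(pool_size, d))      # candidate teaching pool
y = X @ w_star                           # noiseless labels for illustration

def sgd_step(w, x, yi):
    # squared-loss gradient for a linear learner: (w.x - y) x
    return w - eta * (w @ x - yi) * x

# Teacher: greedily pick the example minimizing post-update distance to w*.
w = np.zeros(d)
taught = [np.linalg.norm(w - w_star)]
for _ in range(steps):
    best = min(range(pool_size),
               key=lambda i: np.linalg.norm(sgd_step(w, X[i], y[i]) - w_star))
    w = sgd_step(w, X[best], y[best])
    taught.append(np.linalg.norm(w - w_star))

# Baseline: plain SGD on randomly drawn examples from the same pool.
w = np.zeros(d)
rand = [np.linalg.norm(w - w_star)]
for i in rng.integers(0, pool_size, steps):
    w = sgd_step(w, X[i], y[i])
    rand.append(np.linalg.norm(w - w_star))

print(f"distance to w*: taught {taught[-1]:.4f} vs random {rand[-1]:.4f}")
```

On this toy problem the teacher-selected sequence typically drives the learner toward w* in far fewer examples than random sampling, which is the convergence speed-up the abstract refers to.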

Author Information

Weiyang Liu (Georgia Institute of Technology)
Bo Dai (Georgia Institute of Technology)
Ahmad Humayun (Georgia Institute of Technology)
Charlene Tay (Indiana University)
Chen Yu (Indiana University)
Linda Smith (Indiana University)
James Rehg (Georgia Institute of Technology)
Le Song (Georgia Institute of Technology)
