In this paper, we consider the problem of Iterative Machine Teaching (IMT), where a teacher provides examples to a learner iteratively so that the learner converges quickly to a target model. Existing IMT algorithms, however, are built solely on parameterized families of target models. They focus on convergence in the parameter space, which makes them difficult to apply when the target model is a function with no parametric form. To address this limitation, we study a more general task -- Nonparametric Iterative Machine Teaching (NIMT), which aims to teach nonparametric target models to learners in an iterative fashion. Unlike parametric IMT, which operates purely in the parameter space, we cast NIMT as a functional optimization problem in the function space. To solve it, we propose both a random and a greedy functional teaching algorithm. Under suitable assumptions, we derive the iterative teaching dimension (ITD) of the random teaching algorithm, which serves as a uniform upper bound on the ITD of NIMT. The greedy teaching algorithm attains a significantly lower ITD, yielding a tighter upper bound. Finally, we verify our theoretical findings with extensive experiments in nonparametric scenarios.
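To make the greedy functional teaching idea concrete, below is a minimal sketch of one plausible instantiation: a learner that maintains its hypothesis as a kernel expansion in an RKHS and takes functional gradient steps, with a teacher that greedily picks the pool example with the largest pointwise discrepancy from the target function. The Gaussian kernel, squared loss, greedy criterion, and all names (`KernelLearner`, `greedy_teach`, etc.) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(a, b, gamma=1.0):
    """K(a, b) = exp(-gamma * (a - b)^2) for scalar inputs."""
    return np.exp(-gamma * (a - b) ** 2)

class KernelLearner:
    """Learner keeps f_t as a kernel expansion: f_t(x) = sum_i c_i K(x_i, x)."""
    def __init__(self, gamma=1.0):
        self.centers, self.coefs, self.gamma = [], [], gamma

    def predict(self, x):
        return sum(c * gaussian_kernel(xc, x, self.gamma)
                   for xc, c in zip(self.centers, self.coefs))

    def functional_step(self, x, y, lr=0.5):
        # For squared loss, the functional gradient at (x, y) is
        # (f_t(x) - y) * K(x, .), so one step appends a new center.
        self.centers.append(x)
        self.coefs.append(-lr * (self.predict(x) - y))

def greedy_teach(target_f, pool, learner, steps=50, lr=0.5):
    """Teacher greedily selects the example where the learner's current
    hypothesis deviates most from the target (one simple greedy criterion)."""
    for _ in range(steps):
        x = max(pool, key=lambda x: abs(learner.predict(x) - target_f(x)))
        learner.functional_step(x, target_f(x), lr)
    return learner

# Usage: teach a nonparametric target f*(x) = sin(3x) from a fixed pool.
pool = list(np.linspace(-2, 2, 101))
learner = greedy_teach(lambda x: np.sin(3 * x), pool, KernelLearner(gamma=2.0))
print(max(abs(learner.predict(x) - np.sin(3 * x)) for x in pool))
```

The point of the sketch is that both the learner update and the teacher's selection are defined directly on functions (via the kernel), so nothing in the loop refers to a parameter vector; a random teacher would simply replace the `max` selection with a uniform draw from the pool.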
Author Information
Chen Zhang (Jilin University)
Xiaofeng Cao (Jilin University)
Weiyang Liu (University of Cambridge)
Ivor Tsang (University of Technology Sydney)
James Kwok (Hong Kong University of Science and Technology)
More from the Same Authors
- 2021: Towards Principled Disentanglement for Domain Generalization
  Hanlin Zhang · Yi-Fan Zhang · Weiyang Liu · Adrian Weller · Bernhard Schölkopf · Eric Xing
- 2023 Poster: Effective Structured Prompting by Meta-Learning and Representative Verbalizer
  Weisen Jiang · Yu Zhang · James Kwok
- 2023 Poster: Non-autoregressive Conditional Diffusion Models for Time Series Prediction
  Lifeng Shen · James Kwok
- 2022 Poster: Subspace Learning for Effective Meta-Learning
  Weisen Jiang · James Kwok · Yu Zhang
- 2022 Spotlight: Subspace Learning for Effective Meta-Learning
  Weisen Jiang · James Kwok · Yu Zhang
- 2022 Poster: Efficient Variance Reduction for Meta-learning
  Hansi Yang · James Kwok
- 2022 Spotlight: Efficient Variance Reduction for Meta-learning
  Hansi Yang · James Kwok
- 2021 Poster: SparseBERT: Rethinking the Importance Analysis in Self-attention
  Han Shi · Jiahui Gao · Xiaozhe Ren · Hang Xu · Xiaodan Liang · Zhenguo Li · James Kwok
- 2021 Spotlight: SparseBERT: Rethinking the Importance Analysis in Self-attention
  Han Shi · Jiahui Gao · Xiaozhe Ren · Hang Xu · Xiaodan Liang · Zhenguo Li · James Kwok
- 2020 Poster: Searching to Exploit Memorization Effect in Learning with Noisy Labels
  Quanming Yao · Hansi Yang · Bo Han · Gang Niu · James Kwok
- 2019 Poster: Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations
  Quanming Yao · James Kwok · Bo Han
- 2019 Oral: Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations
  Quanming Yao · James Kwok · Bo Han
- 2018 Poster: Online Convolutional Sparse Coding with Sample-Dependent Dictionary
  Yaqing Wang · Quanming Yao · James Kwok · Lionel Ni
- 2018 Poster: Lightweight Stochastic Optimization for Minimizing Finite Sums with Infinite Data
  Shuai Zheng · James Kwok
- 2018 Oral: Lightweight Stochastic Optimization for Minimizing Finite Sums with Infinite Data
  Shuai Zheng · James Kwok
- 2018 Oral: Online Convolutional Sparse Coding with Sample-Dependent Dictionary
  Yaqing Wang · Quanming Yao · James Kwok · Lionel Ni
- 2017 Poster: Follow the Moving Leader in Deep Learning
  Shuai Zheng · James Kwok
- 2017 Talk: Follow the Moving Leader in Deep Learning
  Shuai Zheng · James Kwok