Teaching dimension (TD) is a fundamental theoretical property for understanding machine teaching algorithms. It measures the sample complexity of teaching a target hypothesis to a learner. The TD of linear learners has been studied extensively, whereas results on teaching non-linear learners are rare. A recent result investigates the TD of polynomial and Gaussian kernel learners. Unfortunately, the theoretical bounds therein show that the TD is high when teaching those non-linear learners. Inspired by the fact that regularization can reduce learning complexity in machine learning, a natural question is whether a similar phenomenon arises in machine teaching. To answer this essential question, this paper proposes a unified theoretical framework termed STARKE to analyze the TD of regularized kernel learners. On the basis of STARKE, we derive a generic result for kernels of any type. Furthermore, we show that the TD of regularized linear and regularized polynomial kernel learners can be strictly reduced. For regularized Gaussian kernel learners, we reveal that, although their TD is infinite, their ε-approximate TD can be exponentially reduced compared with that of the unregularized learners. Extensive experimental results on teaching optimization-based learners verify the theoretical findings.
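To make the regularization effect concrete, the following is a minimal sketch (not the paper's STARKE construction) of teaching a regularized linear learner. A ridge-regression learner trained on a single example (x, y) minimizes (w·x - y)^2 + λ·||w||^2 and returns w = y·x / (x·x + λ), so a teacher can choose x = w* and y = ||w*||^2 + λ to make the learner output the target w* from just one example. The regularization strength lam and the target w_star below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: teaching a ridge-regression learner a target weight vector
# with a single example. Assumed placeholder values for lam and w_star.
import numpy as np

lam = 0.5                                # regularization strength (assumed)
w_star = np.array([2.0, -1.0, 0.5])      # target hypothesis (assumed)

# Teacher's single example: x = w*, y = ||w*||^2 + lam
x = w_star
y = w_star @ w_star + lam

# Learner: ridge regression on the one-example teaching set,
# closed form w = (X^T X + lam I)^{-1} X^T y with X a single row.
X = x.reshape(1, -1)
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                        X.T @ np.array([y]))

# The learner recovers the target exactly from one teaching example.
print(np.allclose(w_hat.ravel(), w_star))  # True
```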
Author Information
Hong Qian (East China Normal University)
Xu-Hui Liu (Nanjing University)
Chen-Xi Su (East China Normal University)
Aimin Zhou (East China Normal University)
Yang Yu (Nanjing University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: The Teaching Dimension of Regularized Kernel Learners »
  Tue. Jul 19th through Wed the 20th, Room Hall E #1210
More from the Same Authors
- 2023 Poster: Policy Regularization with Dataset Constraint for Offline Reinforcement Learning »
  Yuhang Ran · Yi-Chen Li · Fuxiang Zhang · Zongzhang Zhang · Yang Yu
- 2023 Poster: Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning »
  Yihao Sun · Jiaji Zhang · Chengxing Jia · Haoxin Lin · Junyin Ye · Yang Yu
- 2022 Poster: Black-Box Tuning for Language-Model-as-a-Service »
  Tianxiang Sun · Yunfan Shao · Hong Qian · Xuanjing Huang · Xipeng Qiu
- 2022 Spotlight: Black-Box Tuning for Language-Model-as-a-Service »
  Tianxiang Sun · Yunfan Shao · Hong Qian · Xuanjing Huang · Xipeng Qiu
- 2021 : RL Research-to-RealLife Gap Panel »
  Craig Buhr · Jeff Mendenhall · Yang Yu · Matthew Taylor