We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks. Existing approaches conventionally learn full model parameters independently and then compress them via ad hoc processing such as model pruning or filter factorization. WSNet instead learns model parameters by sampling from a compact set of learnable parameters, which naturally enforces parameter sharing throughout the learning process. We demonstrate that this weight sampling approach (and the induced WSNet) favorably promotes both weight and computation sharing. With this method, we can efficiently learn much smaller networks with competitive performance compared to baseline networks with the same number of convolution filters. Specifically, we consider learning compact and efficient 1D convolutional neural networks for audio classification. Extensive experiments on multiple audio classification datasets verify the effectiveness of WSNet. Combined with weight quantization, the resulting models are up to 180x smaller and theoretically up to 16x faster than well-established baselines, without noticeable performance drop.
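To make the weight-sampling idea concrete, below is a minimal sketch of a 1D convolution whose filters are drawn as overlapping windows from a single compact parameter vector, in the spirit of the abstract's description. This is an illustrative PyTorch sketch, not the authors' implementation; the class name SampledConv1d, the shared vector phi, and the sample_stride argument are all hypothetical names introduced here.

```python
# Minimal sketch of WSNet-style weight sampling for a 1D convolution.
# Illustrative only; SampledConv1d, phi, and sample_stride are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SampledConv1d(nn.Module):
    """1D convolution whose filters are overlapping windows of one shared vector."""

    def __init__(self, in_channels, out_channels, kernel_size, sample_stride=1):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.sample_stride = sample_stride
        # Each flattened filter has in_channels * kernel_size entries; the
        # shared vector only needs sample_stride fresh entries per extra filter.
        filter_len = in_channels * kernel_size
        phi_len = filter_len + (out_channels - 1) * sample_stride
        self.phi = nn.Parameter(0.01 * torch.randn(phi_len))  # compact weight store
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # Sample out_channels overlapping windows from phi: adjacent filters
        # share all but sample_stride entries, enforcing weight sharing.
        filter_len = self.in_channels * self.kernel_size
        filters = self.phi.unfold(0, filter_len, self.sample_stride)
        weight = filters.reshape(self.out_channels, self.in_channels, self.kernel_size)
        return F.conv1d(x, weight, self.bias)


# Usage: a batch of 8 one-second raw waveforms sampled at 16 kHz.
conv = SampledConv1d(in_channels=1, out_channels=64, kernel_size=9, sample_stride=2)
y = conv(torch.randn(8, 1, 16000))  # -> shape (8, 64, 15992)
```

Under these example settings, a standard convolution stores 64 * 1 * 9 = 576 weights, while the shared vector holds only 9 + 63 * 2 = 135. Because adjacent filters overlap, their partial inner products can also be reused across output channels, which is the intuition behind the computation sharing and the theoretical speedup claimed above.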
Author Information
Xiaojie Jin (National University of Singapore)
Yingzhen Yang (University of Illinois at Urbana-Champaign)
Ning Xu (Snap)
Jianchao Yang (Bytedance Inc.)
Nebojsa Jojic (Microsoft Research)
Jiashi Feng (National University of Singapore)
Shuicheng Yan (Qihoo/360)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: WSNet: Compact and Efficient Networks Through Weight Sampling
  Fri. Jul 13th 03:00 -- 03:20 PM, Room Victoria
More from the Same Authors
- 2022: Fast Proximal Gradient Descent for Support Regularized Sparse Graph
  Dongfang Sun · Yingzhen Yang
- 2021 Poster: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection
  Hanshu YAN · Jingfeng Zhang · Gang Niu · Jiashi Feng · Vincent Tan · Masashi Sugiyama
- 2021 Spotlight: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection
  Hanshu YAN · Jingfeng Zhang · Gang Niu · Jiashi Feng · Vincent Tan · Masashi Sugiyama
- 2021 Poster: Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing
  Kaixin Wang · Kuangqi Zhou · Qixin Zhang · Jie Shao · Bryan Hooi · Jiashi Feng
- 2021 Spotlight: Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing
  Kaixin Wang · Kuangqi Zhou · Qixin Zhang · Jie Shao · Bryan Hooi · Jiashi Feng
- 2020 Poster: Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation
  Jian Liang · Dapeng Hu · Jiashi Feng
- 2018 Poster: Policy Optimization with Demonstrations
  Bingyi Kang · Zequn Jie · Jiashi Feng
- 2018 Oral: Policy Optimization with Demonstrations
  Bingyi Kang · Zequn Jie · Jiashi Feng
- 2018 Poster: Understanding Generalization and Optimization Performance of Deep CNNs
  Pan Zhou · Jiashi Feng
- 2018 Oral: Understanding Generalization and Optimization Performance of Deep CNNs
  Pan Zhou · Jiashi Feng