Quantized neural networks have drawn significant attention because they reduce memory and computational cost at inference time. Moreover, there is folklore that quantization acts as an implicit regularizer and can therefore improve the generalization of neural networks, yet no existing work has formalized this folklore. In this paper, we treat the binary weights of a neural network as random variables under stochastic rounding and study how their distributions propagate across the layers of the network. We propose a \emph{quasi neural network} to approximate this distribution propagation: a neural network with continuous parameters and a smooth activation function. We derive the neural tangent kernel (NTK) of this quasi neural network and show that the eigenvalues of the NTK decay at an approximately exponential rate, comparable to that of a Gaussian kernel with a randomized scale. Our experiments verify that the proposed quasi neural network approximates the binary weight neural network well. Finally, binary weight neural networks achieve a smaller generalization gap than their real-valued counterparts.
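To make the setup concrete, the following is a minimal sketch of the stochastic rounding scheme the abstract refers to, under the standard assumption that real-valued weights are clipped to $[-1, 1]$ and each weight is rounded to $+1$ with probability $(1 + w)/2$, so the binary weight is an unbiased estimate of the real one. The function name `stochastic_binarize` is illustrative, not from the paper.

```python
import numpy as np

def stochastic_binarize(w, rng):
    """Stochastically round real-valued weights to {-1, +1}.

    Each weight w_i maps to +1 with probability (1 + w_i) / 2,
    so that E[b_i] = w_i (the rounding is unbiased).
    """
    w = np.clip(w, -1.0, 1.0)          # keep weights in the representable range
    p_plus = (1.0 + w) / 2.0           # probability of rounding up to +1
    return np.where(rng.random(w.shape) < p_plus, 1.0, -1.0)

rng = np.random.default_rng(0)
w = np.array([0.8, -0.5, 0.0])
# Averaging many independent roundings recovers the real weights,
# which is why the binary weights can be treated as random variables
# whose distribution is propagated through the network.
samples = np.stack([stochastic_binarize(w, rng) for _ in range(20000)])
print(samples.mean(axis=0))
```

Because the rounding is unbiased, the empirical mean of the samples concentrates around the original weights, e.g. near `[0.8, -0.5, 0.0]` above.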
Author Information
Kaiqi Zhang (UCSB)
Ming Yin (UCSB/Princeton)
Yu-Xiang Wang (UC Santa Barbara / Amazon)