

Talk in Workshop: Principled Approaches to Deep Learning

Contributed Presentation 1 - Towards a Deeper Understanding of Training Quantized Neural Networks

2017 Talk

Abstract:

Towards a Deeper Understanding of Training Quantized Neural Networks

Hao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, Tom Goldstein

Training neural networks with coarsely quantized weights is a key step towards learning on embedded platforms with limited computing resources, memory capacity, and power budgets. Numerous recent publications have studied methods for training quantized networks, but these studies have been purely experimental. In this work, we investigate the theory of training quantized neural networks. We analyze the convergence properties of commonly used quantized training methods. We also show that training algorithms that exploit high-precision representations have an important annealing property that purely quantized training methods lack, which explains many of the observed empirical differences between these types of algorithms.
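To make the contrast concrete, the following is a minimal sketch (not from the talk; all function and variable names are illustrative) of the two families of methods on a toy least-squares problem: a purely quantized update that rounds the weights after every step, versus a BinaryConnect-style update that computes gradients at quantized weights but accumulates them in a high-precision buffer, so that small updates can add up rather than being rounded away.

```python
import numpy as np

def quantize(w, step=2**-4):
    """Round weights to a uniform grid with the given step (illustrative)."""
    return step * np.round(w / step)

def grad(w, x, y):
    """Gradient of a squared loss for a toy linear model (illustrative)."""
    return x.T @ (x @ w - y) / len(y)

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = x @ w_true
lr = 0.05

# Purely quantized training: weights are re-quantized after every update,
# so gradient steps smaller than the grid spacing can be rounded away.
w_q = quantize(rng.normal(size=8))
for _ in range(200):
    w_q = quantize(w_q - lr * grad(w_q, x, y))

# High-precision (BinaryConnect-style) training: gradients are evaluated at
# the quantized weights but accumulated into a full-precision buffer, which
# lets small updates accumulate (the annealing effect noted in the abstract).
w_hp = rng.normal(size=8)
for _ in range(200):
    w_hp = w_hp - lr * grad(quantize(w_hp), x, y)

print("purely quantized loss :", np.mean((x @ w_q - y) ** 2))
print("high-precision loss   :", np.mean((x @ quantize(w_hp) - y) ** 2))
```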
