Extensive empirical evidence has corroborated that noise plays a crucial role in the effective and efficient training of deep neural networks. The underlying theory, however, remains largely unknown. This paper studies this fundamental problem through training a simple two-layer convolutional neural network model. Although training such a network requires solving a non-convex optimization problem with a spurious local optimum and a global optimum, we prove that a perturbed gradient descent algorithm in conjunction with noise annealing is guaranteed to converge to a global optimum in polynomial time with arbitrary initialization. This implies that the noise enables the algorithm to efficiently escape from the spurious local optimum. Numerical experiments are provided to support our theory.
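To make the abstract's central idea concrete, here is a minimal runnable sketch of perturbed gradient descent with noise annealing on a toy one-dimensional double-well loss. The Gaussian perturbation, the geometric annealing schedule, the toy objective, and all parameter values are illustrative assumptions, not the paper's exact algorithm or its two-layer CNN model.

# Sketch of perturbed gradient descent with annealed Gaussian noise.
# All details below (update rule, decay schedule, parameters, toy loss)
# are illustrative assumptions; see the paper for the actual method.
import numpy as np

def perturbed_gd(grad, theta0, lr=0.02, sigma0=15.0, decay=0.998,
                 n_iters=3000, seed=0):
    """Gradient descent whose steps are perturbed by Gaussian noise
    with a scale that is annealed (geometrically decayed) toward zero."""
    rng = np.random.default_rng(seed)
    theta, sigma = float(theta0), sigma0
    for _ in range(n_iters):
        theta -= lr * (grad(theta) + sigma * rng.standard_normal())
        sigma *= decay  # anneal the noise level
    return theta

# Toy objective f(x) = (x^2 - 1)^2 - 0.3x: spurious local minimum near
# x = -1, global minimum near x = +1.
grad_f = lambda x: 4.0 * x * (x**2 - 1.0) - 0.3

print(perturbed_gd(grad_f, theta0=-1.0, sigma0=0.0))   # plain GD: stays near the spurious minimum
print(perturbed_gd(grad_f, theta0=-1.0, sigma0=15.0))  # annealed noise: typically escapes toward +1

Starting from the spurious basin, the noiseless run (sigma0 = 0) converges to the local minimum, while the annealed perturbation typically carries the iterates over the barrier before the noise decays, mirroring the escape behavior the abstract describes.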
Author Information
Mo Zhou (Peking University)
Tianyi Liu (Georgia Institute of Technology)
Yan Li (Georgia Institute of Technology)
Dachao Lin (Peking University)
Enlu Zhou (Georgia Institute of Technology)
Tuo Zhao (Georgia Institute of Technology)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Toward Understanding the Importance of Noise in Training Neural Networks
  Thu Jun 13th 01:30 -- 04:00 AM, Room: Pacific Ballroom
More from the Same Authors
- 2020 Poster: Deep Reinforcement Learning with Smooth Policy
  Qianli Shen · Yan Li · Haoming Jiang · Zhaoran Wang · Tuo Zhao
- 2019 Poster: On Scalable and Efficient Computation of Large Scale Optimal Transport
  Yujia Xie · Minshuo Chen · Haoming Jiang · Tuo Zhao · Hongyuan Zha
- 2019 Oral: On Scalable and Efficient Computation of Large Scale Optimal Transport
  Yujia Xie · Minshuo Chen · Haoming Jiang · Tuo Zhao · Hongyuan Zha
- 2017 Poster: Online Partial Least Square Optimization: Dropping Convexity for Better Efficiency and Scalability
  Zhehui Chen · Lin Yang · Chris Junchi Li · Tuo Zhao
- 2017 Talk: Online Partial Least Square Optimization: Dropping Convexity for Better Efficiency and Scalability
  Zhehui Chen · Lin Yang · Chris Junchi Li · Tuo Zhao