It is well known that training deep neural networks and recurrent neural networks is challenging for tasks that exhibit long term dependencies; the vanishing and exploding gradient problems are closely associated with these difficulties. One approach to addressing vanishing and exploding gradients is to place soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation, so orthogonality may be a desirable property. This paper explores issues with optimization convergence, speed, and gradient stability when encouraging or enforcing orthogonality. To perform this analysis, we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and thereby control the degree of expansivity induced during backpropagation. We find that hard constraints on orthogonality can negatively affect both the speed of convergence and model performance.
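The abstract mentions two mechanisms: a soft constraint that encourages a weight matrix to stay near orthogonal, and a factorized parameterization that bounds the matrix's singular values and hence its expansivity during backpropagation. The sketch below is only an illustration of those ideas, not the authors' code; PyTorch is assumed, and the function names and the `spectral_margin` parameter are placeholders.

```python
import torch

def soft_orthogonality_penalty(W, coeff=1e-3):
    """Soft constraint: penalize deviation of W^T W from the identity.
    This term would typically be added to the training loss."""
    I = torch.eye(W.shape[1], dtype=W.dtype, device=W.device)
    return coeff * torch.linalg.norm(W.T @ W - I) ** 2

def bounded_spectrum_matrix(U, p, V, spectral_margin=0.1):
    """Factorized weight W = U diag(s) V^T, where U and V are assumed
    (approximately) orthogonal and the singular values s are squashed
    into [1 - spectral_margin, 1 + spectral_margin] via a sigmoid,
    bounding how much the matrix can expand or contract gradients."""
    s = 1.0 + spectral_margin * (2.0 * torch.sigmoid(p) - 1.0)
    return U @ torch.diag(s) @ V.T
```

In a recurrent network, the factorized form would stand in for the recurrent weight matrix, with the margin controlling how far the spectrum may drift from exact orthogonality (margin 0 corresponding to a hard constraint).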
Author Information
Eugene Vorontsov (MILA)
Chiheb Trabelsi (École Polytechnique de Montréal)
Christopher Pal (École Polytechnique de Montréal)
Samuel Kadoury (École Polytechnique de Montréal)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: On orthogonality and learning RNNs with long term dependencies »
  Tue. Aug 8th 08:30 AM -- 12:00 PM Room Gallery #39
More from the Same Authors
- 2020 Poster: AR-DAE: Towards Unbiased Neural Entropy Gradient Estimation »
  Jae Hyun Lim · Aaron Courville · Christopher Pal · Chin-Wei Huang
- 2018 Poster: Focused Hierarchical RNNs for Conditional Sequence Processing »
  Rosemary Nan Ke · Konrad Zolna · Alessandro Sordoni · Zhouhan Lin · Adam Trischler · Yoshua Bengio · Joelle Pineau · Laurent Charlin · Christopher Pal
- 2018 Oral: Focused Hierarchical RNNs for Conditional Sequence Processing »
  Rosemary Nan Ke · Konrad Zolna · Alessandro Sordoni · Zhouhan Lin · Adam Trischler · Yoshua Bengio · Joelle Pineau · Laurent Charlin · Christopher Pal