

Spotlight

Better Training using Weight-Constrained Stochastic Dynamics

Benedict Leimkuhler · Tiffany Vlaar · Timothée Pouchon · Amos Storkey


Abstract:

We employ constraints to control the parameter space of deep neural networks throughout training. The use of customized, appropriately designed constraints can reduce the vanishing/exploding gradients problem, improve the smoothness of classification boundaries, control weight magnitudes, and stabilize deep neural networks, thus enhancing the robustness of training algorithms and the generalization capabilities of neural networks. We provide a general approach to efficiently incorporate constraints into a stochastic gradient Langevin framework, allowing enhanced exploration of the loss landscape. We also present specific examples of constrained training methods motivated by orthogonality preservation for weight matrices and explicit weight normalizations. Discretization schemes are provided both for the overdamped formulation of Langevin dynamics and for the underdamped form, in which momenta further improve sampling efficiency. These optimization schemes can be used directly, without needing to adapt neural network architecture design choices or to modify the objective with regularization terms, and yield performance improvements in classification tasks.
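To make the idea concrete, below is a minimal sketch of one overdamped stochastic gradient Langevin step with a simple norm constraint on a weight vector. The function name, hyperparameter values, and the projection step are illustrative assumptions, not the authors' method: the paper's general framework enforces constraints through Lagrange multipliers in the dynamics, whereas this sketch simply projects back onto the constraint surface after an unconstrained step.

```python
import torch

def constrained_sgld_step(w, grad, lr=1e-2, temperature=1e-4, radius=1.0):
    """One projected stochastic gradient Langevin step (illustrative).

    Performs an unconstrained overdamped Langevin update, then projects
    the weights back onto the sphere ||w|| = radius. NOTE: the paper
    derives constrained integrators via Lagrange multipliers; naive
    projection is used here only as a simple stand-in.
    """
    noise = torch.randn_like(w) * (2.0 * lr * temperature) ** 0.5
    w = w - lr * grad + noise           # unconstrained Langevin update
    return radius * w / w.norm()        # enforce the norm constraint

# Toy usage (hypothetical): keep a weight vector on the unit sphere
# while descending the quadratic loss L(w) = 0.5 * ||w - t||^2.
t = torch.tensor([0.6, 0.8, 0.0])
w = torch.randn(3)
w = w / w.norm()                        # start on the constraint surface
for _ in range(500):
    grad = w - t                        # gradient of the toy loss
    w = constrained_sgld_step(w, grad)
print(w, w.norm())                      # w remains unit-norm throughout
```

Fixing the weight norm in this way is one instance of the explicit weight normalizations the abstract mentions; the injected Gaussian noise is what distinguishes the Langevin update from plain projected gradient descent and drives the enhanced exploration of the loss landscape.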
