We employ constraints to control the parameter space of deep neural networks throughout training. The use of customised, appropriately designed constraints can reduce the vanishing/exploding gradients problem, improve the smoothness of classification boundaries, control weight magnitudes, and stabilize deep neural networks, thus enhancing the robustness of training algorithms and the generalization capabilities of neural networks. We provide a general approach to efficiently incorporate constraints into a stochastic gradient Langevin framework, allowing enhanced exploration of the loss landscape. We also present specific examples of constrained training methods motivated by orthogonality preservation for weight matrices and explicit weight normalizations. Discretization schemes are provided both for the overdamped formulation of Langevin dynamics and for the underdamped form, in which momenta further improve sampling efficiency. These optimisation schemes can be used directly, without adapting neural network architecture design choices or modifying the objective with regularization terms, and yield performance improvements on classification tasks.
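To make the idea concrete, the overdamped case can be sketched as a standard Euler-Maruyama stochastic gradient Langevin step followed by a projection back onto the constraint manifold. This is a minimal illustrative sketch, not the paper's actual discretization: the function name, step size `h`, inverse temperature `beta`, and the choice of polar projection for the orthogonality constraint are all assumptions made here for illustration.

```python
import numpy as np

def sgld_step_orthogonal(W, grad, h=1e-3, beta=1e4, rng=None):
    """One overdamped stochastic gradient Langevin step on a weight
    matrix W, followed by a projection back onto the set of orthogonal
    matrices. Illustrative only; the paper derives its own schemes.

    W    : current weight matrix (square, assumed orthogonal on entry)
    grad : stochastic gradient of the loss w.r.t. W
    h    : step size (assumption)
    beta : inverse temperature controlling the noise level (assumption)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Euler-Maruyama update: gradient step plus Gaussian noise
    noise = np.sqrt(2.0 * h / beta) * rng.standard_normal(W.shape)
    W_new = W - h * grad + noise
    # Polar projection: the nearest orthogonal matrix to W_new in the
    # Frobenius norm is U @ Vt, where W_new = U @ diag(s) @ Vt.
    U, _, Vt = np.linalg.svd(W_new, full_matrices=False)
    return U @ Vt
```

After each step the weight matrix satisfies `W @ W.T = I` exactly, so the orthogonality constraint is preserved throughout training rather than encouraged via a regularization penalty.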
Author Information
Benedict Leimkuhler (University of Edinburgh)
Tiffany Vlaar (University of Edinburgh)
Timothée Pouchon (University of Edinburgh)
Amos Storkey (University of Edinburgh)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Better Training using Weight-Constrained Stochastic Dynamics
  Tue. Jul 20th 04:00 -- 06:00 PM Room
More from the Same Authors
- 2022: Adversarial robustness of $\beta$-VAE through the lens of local geometry
  Asif Khan · Amos Storkey
- 2022 Poster: Multirate Training of Neural Networks
  Tiffany Vlaar · Benedict Leimkuhler
- 2022 Spotlight: Multirate Training of Neural Networks
  Tiffany Vlaar · Benedict Leimkuhler
- 2022 Poster: What Can Linear Interpolation of Neural Network Loss Landscapes Tell Us?
  Tiffany Vlaar · Jonathan Frankle
- 2022 Spotlight: What Can Linear Interpolation of Neural Network Loss Landscapes Tell Us?
  Tiffany Vlaar · Jonathan Frankle
- 2021 Poster: Neural Architecture Search without Training
  Joe Mellor · Jack Turner · Amos Storkey · Elliot Crowley
- 2021 Oral: Neural Architecture Search without Training
  Joe Mellor · Jack Turner · Amos Storkey · Elliot Crowley