We propose multirate training of neural networks: partitioning the network parameters into "fast" and "slow" parts that are trained on different time scales, with the slow parts updated less frequently. By choosing appropriate partitionings we can obtain substantial computational speed-up for transfer learning tasks. We show for applications in vision and NLP that we can fine-tune deep neural networks in almost half the time, without reducing the generalization performance of the resulting models. We analyze the convergence properties of our multirate scheme and draw a comparison with vanilla SGD. We also discuss parameter splitting choices that could enhance generalization performance when networks are trained from scratch. A multirate approach can be used to learn different features present in the data and can act as a form of regularization. Our paper unlocks the potential of using multirate techniques for neural network training and provides several starting points for future work in this area.
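To make the fast/slow update schedule concrete, here is a minimal sketch assuming a PyTorch-style setup. The function name multirate_step, the split into fast and slow parameter groups, the interval k, and the use of two plain SGD optimizers are illustrative assumptions for this sketch, not the authors' exact algorithm.

```python
import torch

def multirate_step(model, batch, loss_fn, opt_fast, opt_slow, slow_params, step, k=5):
    """One multirate training step: the "fast" parameters are updated every
    step, the "slow" parameters only every k-th step (illustrative sketch)."""
    x, y = batch
    slow_update = (step % k == 0)

    # Skip gradient computation for the slow part on most steps; in a
    # fine-tuning setting (large pretrained backbone as "slow", small new
    # head as "fast") this is where a computational saving could come from.
    for p in slow_params:
        p.requires_grad_(slow_update)

    loss = loss_fn(model(x), y)
    opt_fast.zero_grad(set_to_none=True)
    opt_slow.zero_grad(set_to_none=True)
    loss.backward()

    opt_fast.step()         # fast parameters: updated every step
    if slow_update:
        opt_slow.step()     # slow parameters: updated every k-th step
    return loss.item()
```

For a transfer-learning task, one plausible partition is to treat a newly added classifier head as the fast part and the pretrained backbone as the slow part, constructing one optimizer per group (e.g. two torch.optim.SGD instances) and calling multirate_step once per mini-batch; the specific partition and hyperparameters above are assumptions, not prescriptions from the paper.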
Author Information
Tiffany Vlaar (University of Edinburgh)
Benedict Leimkuhler (University of Edinburgh)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Multirate Training of Neural Networks
  Thu. Jul 21 through Fri. Jul 22, Hall E #325
More from the Same Authors
- 2022 Poster: What Can Linear Interpolation of Neural Network Loss Landscapes Tell Us?
  Tiffany Vlaar · Jonathan Frankle
- 2022 Spotlight: What Can Linear Interpolation of Neural Network Loss Landscapes Tell Us?
  Tiffany Vlaar · Jonathan Frankle
- 2021 Poster: Better Training using Weight-Constrained Stochastic Dynamics
  Benedict Leimkuhler · Tiffany Vlaar · Timothée Pouchon · Amos Storkey
- 2021 Spotlight: Better Training using Weight-Constrained Stochastic Dynamics
  Benedict Leimkuhler · Tiffany Vlaar · Timothée Pouchon · Amos Storkey