On Low Rank Training of Deep Neural Networks
Siddhartha Kamalakara · Acyr Locatelli · Bharat Venkitesh · Jimmy Ba · Yarin Gal · Aidan Gomez

Training deep neural networks in low rank, i.e. with factorised layers, is of particular interest to the community: it offers efficiency over unfactorised training in terms of both memory consumption and training time. Prior work has focused on low rank approximations of pre-trained networks and on training in low rank space with additional objectives, offering various ad hoc explanations for the chosen practice. We analyse techniques that work well in practice, and through extensive ablations on models such as GPT-2 we provide evidence falsifying common beliefs in the field, hinting in the process at exciting open questions that still need answering.
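To make the setting concrete, a minimal sketch of what "training in low rank with factorised layers" means: replace a full weight matrix W of shape (d_out, d_in) with two trainable factors U (d_out, r) and V (r, d_in), r << min(d_in, d_out). The names and dimensions below are illustrative assumptions, not the paper's specific parameterisation.

```python
import numpy as np

# Parameter counts: a full linear layer vs. its rank-r factorisation.
def full_linear_params(d_in, d_out):
    return d_out * d_in

def low_rank_params(d_in, d_out, r):
    # U has d_out * r entries, V has r * d_in entries.
    return d_out * r + r * d_in

rng = np.random.default_rng(0)
d_in, d_out, r = 1024, 1024, 64  # illustrative sizes; r << d_in, d_out

U = rng.standard_normal((d_out, r))   # trainable factor U
V = rng.standard_normal((r, d_in))    # trainable factor V
x = rng.standard_normal(d_in)

# Forward pass: y = U (V x). Cost is O(r * (d_in + d_out))
# instead of O(d_in * d_out) for the unfactorised layer.
y = U @ (V @ x)

# Memory saving for these sizes: 2 * 1024 * 64 / 1024^2 = 0.125.
print(low_rank_params(d_in, d_out, r) / full_linear_params(d_in, d_out))
```

Both factors are trained directly by gradient descent, which is what distinguishes low-rank *training* from post-hoc low-rank *approximation* of a pre-trained W.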

Author Information

Siddhartha Kamalakara (ProteinQure)
Acyr Locatelli (Audio Analytic)
Bharat Venkitesh (University of Waterloo)
Jimmy Ba (University of Toronto)
Yarin Gal (University of Oxford)
Aidan Gomez (Google)
