In several recently proposed stochastic optimization methods (e.g. RMSProp, Adam, Adadelta), parameter updates are scaled by the inverse square roots of exponential moving averages of squared past gradients. Maintaining these per-parameter second-moment estimators requires memory equal to the number of parameters. For the case of neural network weight matrices, we propose maintaining only the per-row and per-column sums of these moving averages, and estimating the per-parameter second moments from these sums. We demonstrate empirically that this method produces results similar to the baseline. We then show that adaptive methods can produce larger-than-desired updates when the decay rate of the second-moment accumulator is too slow, and propose update clipping and a gradually increasing decay rate as remedies. Combining these techniques and dropping momentum, we achieve results comparable to the published Adam regime when training the Transformer model on the WMT 2014 English-German machine translation task, while using very little auxiliary storage in the optimizer. Finally, we propose scaling the parameter updates based on the scale of the parameters themselves.
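A minimal NumPy sketch of the ideas summarized above, for a single weight matrix: factored second-moment accumulators (per-row and per-column sums only), a gradually increasing decay rate, update clipping, and a step size scaled by the parameter scale. The function name, the decay schedule exponent, the clipping threshold, and the small constants are illustrative assumptions for this sketch, not the authors' reference implementation or tuned values.

```python
import numpy as np

def adafactor_matrix_update(W, G, R, C, step, rel_lr=1e-2, eps=1e-30, clip_d=1.0):
    """One optimizer step for an n x m weight matrix W with gradient G.

    R (shape n) and C (shape m) hold per-row and per-column sums of the
    exponential moving average of squared gradients, so only n + m values
    persist between steps instead of the full n x m second-moment matrix.
    """
    # Gradually increasing decay rate (illustrative schedule): beta2_t = 1 - t^(-0.8).
    beta2 = 1.0 - step ** (-0.8)

    sq = G * G + eps
    # Update the factored accumulators: row sums and column sums of the moving average.
    R[:] = beta2 * R + (1.0 - beta2) * sq.sum(axis=1)
    C[:] = beta2 * C + (1.0 - beta2) * sq.sum(axis=0)

    # Rank-1 reconstruction of the per-parameter second-moment estimates.
    V_hat = np.outer(R, C) / R.sum()

    # Adaptive update, then "update clipping" by its root-mean-square.
    U = G / np.sqrt(V_hat)
    U /= max(1.0, np.sqrt(np.mean(U * U)) / clip_d)

    # Scale the step by the scale of the parameters themselves (illustrative floor of 1e-3).
    step_size = rel_lr * max(np.sqrt(np.mean(W * W)), 1e-3)
    W -= step_size * U
    return W

# Toy usage with stand-in gradients:
rng = np.random.default_rng(0)
n, m = 256, 512
W = 0.02 * rng.standard_normal((n, m))
R, C = np.zeros(n), np.zeros(m)
for t in range(1, 6):
    G = 0.1 * rng.standard_normal((n, m))
    adafactor_matrix_update(W, G, R, C, t)
```

Only R and C carry over between steps, which is where the sublinear auxiliary storage comes from relative to keeping a full per-parameter second-moment matrix.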
Author Information
Noam Shazeer (Google)
Mitchell Stern (UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
  Wed. Jul 11th 04:15 -- 07:00 PM, Room Hall B #120
More from the Same Authors
- 2019 Poster: Insertion Transformer: Flexible Sequence Generation via Insertion Operations
  Mitchell Stern · William Chan · Jamie Kiros · Jakob Uszkoreit
- 2019 Oral: Insertion Transformer: Flexible Sequence Generation via Insertion Operations
  Mitchell Stern · William Chan · Jamie Kiros · Jakob Uszkoreit
- 2018 Poster: Image Transformer
  Niki Parmar · Ashish Vaswani · Jakob Uszkoreit · Lukasz Kaiser · Noam Shazeer · Alexander Ku · Dustin Tran
- 2018 Poster: Fast Decoding in Sequence Models Using Discrete Latent Variables
  Lukasz Kaiser · Samy Bengio · Aurko Roy · Ashish Vaswani · Niki Parmar · Jakob Uszkoreit · Noam Shazeer
- 2018 Oral: Image Transformer
  Niki Parmar · Ashish Vaswani · Jakob Uszkoreit · Lukasz Kaiser · Noam Shazeer · Alexander Ku · Dustin Tran
- 2018 Oral: Fast Decoding in Sequence Models Using Discrete Latent Variables
  Lukasz Kaiser · Samy Bengio · Aurko Roy · Ashish Vaswani · Niki Parmar · Jakob Uszkoreit · Noam Shazeer