The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well; sometimes it doesn't. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner's toolbox for problems where ADAM fails.
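The following is a minimal NumPy sketch of the decomposition described in the abstract: the ADAM step is factored into an update direction given by the sign of the gradient estimate and an update magnitude derived from an estimate of the relative variance. The function and variable names (adam_step_decomposed, the toy quadratic gradient) are illustrative assumptions, not the authors' code or the paper's proposed method.

    # Sketch: factor the ADAM step into sign (direction) and a
    # variance-based magnitude, as the abstract describes.
    import numpy as np

    def adam_step_decomposed(m, v, lr=1e-3, eps=1e-8):
        """Rewrite the bias-corrected ADAM step m / (sqrt(v) + eps) as
        sign(m) * lr / sqrt(1 + relative variance)."""
        direction = np.sign(m)                               # direction: sign of gradient estimate
        rel_var = np.maximum(v - m**2, 0.0) / (m**2 + eps)   # estimated relative variance sigma^2 / mu^2
        magnitude = lr / np.sqrt(1.0 + rel_var)              # shrinks as gradients become noisier
        return -direction * magnitude

    # Toy usage: running moment estimates of a noisy quadratic gradient.
    rng = np.random.default_rng(0)
    w = np.ones(3)
    m = np.zeros(3)   # exponential moving average of gradients
    v = np.zeros(3)   # exponential moving average of squared gradients
    beta1, beta2 = 0.9, 0.999
    for t in range(1, 101):
        g = 2.0 * w + 0.1 * rng.standard_normal(3)   # noisy gradient (illustrative objective)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat, v_hat = m / (1 - beta1**t), v / (1 - beta2**t)
        w += adam_step_decomposed(m_hat, v_hat, lr=1e-2)

Since |m| / sqrt(v) = 1 / sqrt(1 + (v - m^2) / m^2), this factored step coincides (up to the small eps terms) with the usual ADAM update, making the sign and relative-variance aspects explicit per weight.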
Author Information
Lukas Balles (Max Planck Institute for Intelligent Systems)
Philipp Hennig (University of Tübingen)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients
  Thu. Jul 12th, 02:20 -- 02:40 PM, Room A9
More from the Same Authors
- 2021 Poster: High-Dimensional Gaussian Process Inference with Derivatives
  Filip de Roos · Alexandra Gessner · Philipp Hennig
- 2021 Spotlight: High-Dimensional Gaussian Process Inference with Derivatives
  Filip de Roos · Alexandra Gessner · Philipp Hennig
- 2021 Poster: Bayesian Quadrature on Riemannian Data Manifolds
  Christian Fröhlich · Alexandra Gessner · Philipp Hennig · Bernhard Schölkopf · Georgios Arvanitidis
- 2021 Spotlight: Bayesian Quadrature on Riemannian Data Manifolds
  Christian Fröhlich · Alexandra Gessner · Philipp Hennig · Bernhard Schölkopf · Georgios Arvanitidis
- 2021 Poster: Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
  Robin M Schmidt · Frank Schneider · Philipp Hennig
- 2021 Spotlight: Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
  Robin M Schmidt · Frank Schneider · Philipp Hennig