Stochastic gradient descent with momentum (SGDm) is one of the most popular optimization algorithms in deep learning. While there is a rich theory of SGDm for convex problems, the theory is considerably less developed in the context of deep learning, where the problem is non-convex and the gradient noise may exhibit heavy-tailed behavior, as empirically observed in recent studies. In this study, we consider a \emph{continuous-time} variant of SGDm, known as the underdamped Langevin dynamics (ULD), and investigate its asymptotic properties under heavy-tailed perturbations. Supported by recent studies from statistical physics, we argue both theoretically and empirically that the heavy tails of such perturbations can result in a bias even when the step-size is small, in the sense that \emph{the optima of the stationary distribution} of the dynamics might not match \emph{the optima of the cost function to be optimized}. As a remedy, we develop a novel framework, which we coin \emph{fractional} ULD (FULD), and prove that FULD targets the so-called Gibbs distribution, whose optima exactly match the optima of the original cost. We observe that the Euler discretization of FULD has noteworthy algorithmic similarities with \emph{natural gradient} methods and \emph{gradient clipping}, bringing a new perspective on understanding their role in deep learning. We support our theory with experiments conducted on a synthetic model and neural networks.
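For concreteness, the (Brownian-driven) ULD referenced in the abstract is standardly written as the coupled SDE

\begin{align*}
\mathrm{d}v_t &= -\bigl(\gamma v_t + \nabla f(x_t)\bigr)\,\mathrm{d}t + \sqrt{2\gamma\beta^{-1}}\,\mathrm{d}B_t, \\
\mathrm{d}x_t &= v_t\,\mathrm{d}t,
\end{align*}

where $f$ is the cost, $\gamma > 0$ is the friction, and $\beta > 0$ is the inverse temperature. Its stationary distribution is the Gibbs measure $\pi(x, v) \propto \exp\bigl(-\beta\,(f(x) + \tfrac{1}{2}\|v\|^2)\bigr)$, whose $x$-marginal peaks exactly at the minima of $f$; FULD is constructed so that this Gibbs target is preserved even when the Brownian motion $B_t$ is replaced by an $\alpha$-stable Lévy motion.

The sketch below is a minimal illustration, not the authors' exact FULD update rule: it Euler-discretizes ULD driven by symmetric $\alpha$-stable noise, where grad_f, the noise scaling $(2\gamma/\beta)^{1/\alpha}$, and all hyperparameter values are assumptions made for the demo. Setting alpha=2 recovers the usual Gaussian (SGDm-like) case.

```python
# Minimal sketch (assumed, illustrative): Euler discretization of
# underdamped Langevin dynamics driven by symmetric alpha-stable noise.
# alpha = 2 recovers the Brownian/Gaussian case; alpha < 2 is heavy-tailed.
import numpy as np
from scipy.stats import levy_stable


def grad_f(x):
    # Toy non-convex cost f(x) = (x^2 - 1)^2, with minima at x = -1 and x = +1.
    return 4.0 * x * (x**2 - 1.0)


def heavy_tailed_uld(x0=0.0, v0=0.0, eta=1e-3, gamma=1.0, beta=1.0,
                     alpha=1.8, n_steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    # A symmetric alpha-stable Levy increment over a time step eta scales
    # as eta**(1/alpha) (for alpha = 2 this is the familiar sqrt(eta)).
    noises = levy_stable.rvs(alpha, 0.0, scale=eta ** (1.0 / alpha),
                             size=n_steps, random_state=rng)
    x, v = x0, v0
    xs = np.empty(n_steps)
    for k in range(n_steps):
        # Velocity (momentum) update: friction, gradient drift, stable noise.
        # The (2*gamma/beta)**(1/alpha) factor matches sqrt(2*gamma/beta)
        # at alpha = 2 and is an assumed scaling otherwise.
        v = (v - eta * (gamma * v + grad_f(x))
             + (2.0 * gamma / beta) ** (1.0 / alpha) * noises[k])
        # Position update.
        x = x + eta * v
        xs[k] = x
    return xs


if __name__ == "__main__":
    trace = heavy_tailed_uld()
    print("last iterate:", trace[-1])
```

Because symmetric $\alpha$-stable noise with $\alpha < 2$ has infinite variance, the iterates occasionally take very large jumps; this makes the connection to \emph{gradient clipping} noted in the abstract intuitive, since capping such increments is the natural practical safeguard.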
Author Information
Umut Simsekli (Institut Polytechnique de Paris / University of Oxford)
Lingjiong Zhu (Florida State University)
Yee-Whye Teh (University of Oxford / DeepMind)
Mert Gurbuzbalaban (Rutgers University)
More from the Same Authors
- 2021: Continual Learning via Function-Space Variational Inference: A Unifying View (Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal)
- 2022: Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations (Cong Lu · Philip Ball · Tim G. J. Rudner · Jack Parker-Holder · Michael A. Osborne · Yee-Whye Teh)
- 2023: Synthetic Experience Replay (Cong Lu · Philip Ball · Yee-Whye Teh · Jack Parker-Holder)
- 2023 Poster: Modality-Agnostic Variational Compression of Implicit Neural Representations (Jonathan Richard Schwarz · Jihoon Tack · Yee-Whye Teh · Jaeho Lee · Jinwoo Shin)
- 2023 Poster: Learning Instance-Specific Augmentations by Capturing Local Invariances (Ning Miao · Tom Rainforth · Emile Mathieu · Yann Dubois · Yee-Whye Teh · Adam Foster · Hyunjik Kim)
- 2023 Poster: Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions (Leo Klarner · Tim G. J. Rudner · Michael Reutlinger · Torsten Schindler · Garrett Morris · Charlotte Deane · Yee-Whye Teh)
- 2022 Poster: Continual Learning via Sequential Function-Space Variational Inference (Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal)
- 2022 Spotlight: Continual Learning via Sequential Function-Space Variational Inference (Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal)
- 2021 Poster: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes (Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh)
- 2021 Spotlight: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes (Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh)
- 2021 Test of Time: Bayesian Learning via Stochastic Gradient Langevin Dynamics (Yee-Whye Teh · Max Welling)
- 2021 Poster: Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections (Alexander D. Camuto · Xiaoyu Wang · Lingjiong Zhu · Christopher Holmes · Mert Gurbuzbalaban · Umut Simsekli)
- 2021 Spotlight: Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections (Alexander D. Camuto · Xiaoyu Wang · Lingjiong Zhu · Christopher Holmes · Mert Gurbuzbalaban · Umut Simsekli)
- 2021 Poster: The Heavy-Tail Phenomenon in SGD (Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu)
- 2021 Spotlight: The Heavy-Tail Phenomenon in SGD (Mert Gurbuzbalaban · Umut Simsekli · Lingjiong Zhu)
- 2021 Poster: LieTransformer: Equivariant Self-Attention for Lie Groups (Michael Hutchinson · Charline Le Lan · Sheheryar Zaidi · Emilien Dupont · Yee-Whye Teh · Hyunjik Kim)
- 2021 Spotlight: LieTransformer: Equivariant Self-Attention for Lie Groups (Michael Hutchinson · Charline Le Lan · Sheheryar Zaidi · Emilien Dupont · Yee-Whye Teh · Hyunjik Kim)
- 2020 Poster: MetaFun: Meta-Learning with Iterative Functional Updates (Jin Xu · Jean-Francois Ton · Hyunjik Kim · Adam Kosiorek · Yee-Whye Teh)
- 2020 Poster: Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support (Yuan Zhou · Hongseok Yang · Yee-Whye Teh · Tom Rainforth)
- 2020 Poster: Uncertainty Estimation Using a Single Deep Deterministic Neural Network (Joost van Amersfoort · Lewis Smith · Yee-Whye Teh · Yarin Gal)
- 2019 Poster: Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization (Thanh Huy Nguyen · Umut Simsekli · Gaël Richard)
- 2019 Poster: A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks (Umut Simsekli · Levent Sagun · Mert Gurbuzbalaban)
- 2019 Poster: Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions (Antoine Liutkus · Umut Simsekli · Szymon Majewski · Alain Durmus · Fabian-Robert Stöter)
- 2019 Oral: A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks (Umut Simsekli · Levent Sagun · Mert Gurbuzbalaban)
- 2019 Oral: Hybrid Models with Deep and Invertible Features (Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan)
- 2019 Oral: Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization (Thanh Huy Nguyen · Umut Simsekli · Gaël Richard)
- 2019 Oral: Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions (Antoine Liutkus · Umut Simsekli · Szymon Majewski · Alain Durmus · Fabian-Robert Stöter)
- 2019 Poster: Disentangling Disentanglement in Variational Autoencoders (Emile Mathieu · Tom Rainforth · N. Siddharth · Yee-Whye Teh)
- 2019 Poster: Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances (Bugra Can · Mert Gurbuzbalaban · Lingjiong Zhu)
- 2019 Poster: Hybrid Models with Deep and Invertible Features (Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan)
- 2019 Oral: Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances (Bugra Can · Mert Gurbuzbalaban · Lingjiong Zhu)
- 2019 Oral: Disentangling Disentanglement in Variational Autoencoders (Emile Mathieu · Tom Rainforth · N. Siddharth · Yee-Whye Teh)
- 2019 Poster: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh)
- 2019 Oral: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh)
- 2018 Poster: Progress & Compress: A scalable framework for continual learning (Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee-Whye Teh · Razvan Pascanu · Raia Hadsell)
- 2018 Poster: Mix & Match - Agent Curricula for Reinforcement Learning (Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee-Whye Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu)
- 2018 Oral: Progress & Compress: A scalable framework for continual learning (Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee-Whye Teh · Razvan Pascanu · Raia Hadsell)
- 2018 Oral: Mix & Match - Agent Curricula for Reinforcement Learning (Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee-Whye Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu)
- 2018 Poster: Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization (Umut Simsekli · Cagatay Yildiz · Thanh Huy Nguyen · Ali Taylan Cemgil · Gaël Richard)
- 2018 Oral: Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization (Umut Simsekli · Cagatay Yildiz · Thanh Huy Nguyen · Ali Taylan Cemgil · Gaël Richard)
- 2018 Poster: Conditional Neural Processes (Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee-Whye Teh · Danilo J. Rezende · S. M. Ali Eslami)
- 2018 Poster: Tighter Variational Bounds are Not Necessarily Better (Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh)
- 2018 Oral: Tighter Variational Bounds are Not Necessarily Better (Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh)
- 2018 Oral: Conditional Neural Processes (Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee-Whye Teh · Danilo J. Rezende · S. M. Ali Eslami)
- 2017 Poster: Fractional Langevin Monte Carlo: Exploring Levy Driven Stochastic Differential Equations for MCMC (Umut Simsekli)
- 2017 Talk: Fractional Langevin Monte Carlo: Exploring Levy Driven Stochastic Differential Equations for MCMC (Umut Simsekli)