Oral
Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD
Marten van Dijk · Lam Nguyen · Phuong Ha Nguyen · Dzung Phan
We study Stochastic Gradient Descent (SGD) with diminishing step sizes for convex objective functions. We introduce a definitional framework and theory that defines and characterizes a core property of convex objective functions, which we call curvature. In terms of this curvature we derive a new inequality that can be used to compute an optimal sequence of diminishing step sizes by solving a differential equation. Our exact solutions confirm known results in the literature and allow us to fully characterize a new regularizer together with its corresponding expected convergence rates.
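To make the setting concrete, here is a minimal sketch (not the paper's method or its curvature-derived schedule) of SGD with a classical diminishing step size $\eta_t = 1/(\mu(t + t_0))$ on a strongly convex least-squares objective; the problem instance, the constant `mu`, and the offset `t0` are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch, NOT the paper's algorithm: plain SGD with the
# classical diminishing step size eta_t = 1 / (mu * (t + t0)) on
# F(w) = (1/n) * sum_i 0.5 * (a_i^T w - b_i)^2, a strongly convex
# least-squares objective. The O(1/t) expected rate for such schedules
# is the kind of known result the paper's framework recovers exactly.

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
b = A @ w_true                                  # noiseless targets

mu = np.linalg.eigvalsh(A.T @ A / n).min()      # strong convexity constant
t0 = 50                                         # offset keeps early steps stable
w = np.zeros(d)
for t in range(20000):
    i = rng.integers(n)                         # sample one data point
    grad = (A[i] @ w - b[i]) * A[i]             # stochastic gradient
    eta = 1.0 / (mu * (t + t0))                 # diminishing step size
    w -= eta * grad
```

Running the loop drives `w` toward `w_true`; with a constant step size instead, SGD would only reach a noise-dominated neighborhood of the optimum.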
Author Information
Marten van Dijk (University of Connecticut)
Lam Nguyen (IBM Research, Thomas J. Watson Research Center)
Phuong Ha Nguyen (University of Connecticut)
Dzung Phan (IBM T.J. Watson Research Center)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Characterization of Convex Objective Functions and Optimal Expected Convergence Rates for SGD
  Wed. Jun 12th, 01:30 -- 04:00 AM, Pacific Ballroom #193
More from the Same Authors
- 2022: Fast Convergence for Unstable Reinforcement Learning Problems by Logarithmic Mapping
  Wang Zhang · Lam Nguyen · Subhro Das · Alexandre Megretsky · Luca Daniel · Tsui-Wei Weng
- 2023 Poster: ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction
  Wang Zhang · Lily Weng · Subhro Das · Alexandre Megretsky · Luca Daniel · Lam Nguyen
- 2022 Poster: Nesterov Accelerated Shuffling Gradient Method for Convex Optimization
  Trang Tran · Katya Scheinberg · Lam Nguyen
- 2022 Spotlight: Nesterov Accelerated Shuffling Gradient Method for Convex Optimization
  Trang Tran · Katya Scheinberg · Lam Nguyen
- 2021 Poster: SMG: A Shuffling Gradient-Based Method with Momentum
  Trang Tran · Lam Nguyen · Quoc Tran-Dinh
- 2021 Spotlight: SMG: A Shuffling Gradient-Based Method with Momentum
  Trang Tran · Lam Nguyen · Quoc Tran-Dinh
- 2020 Poster: Stochastic Gauss-Newton Algorithms for Nonconvex Compositional Optimization
  Quoc Tran-Dinh · Nhan H Pham · Lam Nguyen
- 2019 Poster: PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach
  Tsui-Wei Weng · Pin-Yu Chen · Lam Nguyen · Mark Squillante · Akhilan Boopathy · Ivan Oseledets · Luca Daniel
- 2019 Oral: PROVEN: Verifying Robustness of Neural Networks with a Probabilistic Approach
  Tsui-Wei Weng · Pin-Yu Chen · Lam Nguyen · Mark Squillante · Akhilan Boopathy · Ivan Oseledets · Luca Daniel
- 2018 Poster: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
  Lam Nguyen · Phuong Ha Nguyen · Marten van Dijk · Peter Richtarik · Katya Scheinberg · Martin Takac
- 2018 Oral: SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
  Lam Nguyen · Phuong Ha Nguyen · Marten van Dijk · Peter Richtarik · Katya Scheinberg · Martin Takac
- 2017 Poster: SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
  Lam Nguyen · Jie Liu · Katya Scheinberg · Martin Takac
- 2017 Talk: SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient
  Lam Nguyen · Jie Liu · Katya Scheinberg · Martin Takac