31 Results

Poster
Tue 7:00 Self-Concordant Analysis of Frank-Wolfe Algorithms
Pavel Dvurechenskii, Petr Ostroukhov, Kamil Safin, Shimrit Shtern, Mathias Staudigl
Poster
Tue 7:00 Semismooth Newton Algorithm for Efficient Projections onto $\ell_{1, \infty}$-norm Ball
Dejun Chu, Changshui Zhang, Shiliang Sun, Qing Tao
Poster
Tue 8:00 Closing the convergence gap of SGD without replacement
Shashank Rajput, Anant Gupta, Dimitris Papailiopoulos
Poster
Tue 8:00 Distributed Online Optimization over a Heterogeneous Network
Nima Eshraghi, Ben Liang
Poster
Tue 10:00 SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Jakkam Reddi, Sebastian Stich, Ananda Theertha Suresh
Poster
Tue 12:00 Stochastic Frank-Wolfe for Constrained Finite-Sum Minimization
Geoffrey Negiar, Gideon Dresdner, Alicia Yi-Ting Tsai, Laurent El Ghaoui, Francesco Locatello, Robert Freund, Fabian Pedregosa
Poster
Tue 13:00 Inexact Tensor Methods with Dynamic Accuracies
Nikita Doikov, Yurii Nesterov
Poster
Tue 13:00 Optimal Randomized First-Order Methods for Least-Squares Problems
Jonathan Lacotte, Mert Pilanci
Poster
Tue 14:00 Debiased Sinkhorn barycenters
Hicham Janati, Marco Cuturi, Alexandre Gramfort
Poster
Tue 18:00 Hybrid Stochastic-Deterministic Minibatch Proximal Gradient: Less-Than-Single-Pass Optimization with Nearly Optimal Generalization
Pan Zhou, Xiao-Tong Yuan
Poster
Wed 5:00 An Accelerated DFO Algorithm for Finite-sum Convex Functions
Yuwen Chen, Antonio Orvieto, Aurelien Lucchi
Poster
Wed 5:00 On the Convergence of Nesterov's Accelerated Gradient Method in Stochastic Settings
Mido Assran, Mike Rabbat
Poster
Wed 10:00 Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks
Mert Pilanci, Tolga Ergen
Poster
Wed 10:00 Continuous-time Lower Bounds for Gradient-based Algorithms
Michael Muehlebach, Michael Jordan
Poster
Wed 10:00 Efficiently Solving MDPs with Stochastic Mirror Descent
Yujia Jin, Aaron Sidford
Poster
Wed 11:00 Adaptive Gradient Descent without Descent
Yura Malitsky, Konstantin Mishchenko
Poster
Wed 12:00 On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent
Scott Pesme, Aymeric Dieuleveut, Nicolas Flammarion
Poster
Wed 12:00 A new regret analysis for Adam-type algorithms
Ahmet Alacaoglu, Yura Malitsky, Panayotis Mertikopoulos, Volkan Cevher
Poster
Wed 13:00 Boosting Frank-Wolfe by Chasing Gradients
Cyrille W. Combettes, Sebastian Pokutta
Poster
Wed 14:00 Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems
Filip Hanzely, Dmitry Kovalev, Peter Richtarik
Poster
Thu 6:00 Almost Tune-Free Variance Reduction
Bingcong Li, Lingda Wang, Georgios B. Giannakis
Poster
Thu 6:00 Acceleration through spectral density estimation
Fabian Pedregosa, Damien Scieur
Poster
Thu 6:00 Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems
Guangzeng Xie, Luo Luo, Yijiang Lian, Zhihua Zhang
Poster
Thu 6:00 Universal Asymptotic Optimality of Polyak Momentum
Damien Scieur, Fabian Pedregosa
Poster
Thu 6:00 Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization
Sudeep Salgia, Qing Zhao, Sattar Vakili
Poster
Thu 7:00 Spectral Frank-Wolfe Algorithm: Strict Complementarity and Linear Convergence
Lijun Ding, Yingjie Fei, Qiantong Xu, Chengrun Yang
Poster
Thu 12:00 Conditional gradient methods for stochastically constrained convex minimization
Maria Vladarean, Ahmet Alacaoglu, Ya-Ping Hsieh, Volkan Cevher
Poster
Thu 12:00 Random extrapolation for primal-dual coordinate descent
Ahmet Alacaoglu, Olivier Fercoq, Volkan Cevher
Poster
Thu 13:00 Stochastic Subspace Cubic Newton Method
Filip Hanzely, Nikita Doikov, Yurii Nesterov, Peter Richtarik
Poster
Thu 14:00 Anderson Acceleration of Proximal Gradient Methods
Vien Mai, Mikael Johansson
Poster
Thu 15:00 Stochastic Optimization for Regularized Wasserstein Estimators
Marin Ballu, Quentin Berthet, Francis Bach