Talk
Online Learning with Local Permutations and Delayed Feedback
Liran Szlak · Ohad Shamir
We propose an Online Learning with Local Permutations (OLLP) setting, in which the learner is allowed to slightly permute the \emph{order} of the loss functions generated by an adversary. On one hand, this models natural situations where the exact order of the learner's responses is not crucial; on the other hand, it might allow better learning and regret performance, by mitigating highly adversarial loss sequences. Moreover, with random permutations, this can be seen as a setting interpolating between adversarial and stochastic losses. In this paper, we consider the applicability of this setting to convex online learning with delayed feedback, in which the feedback on the prediction made in round $t$ arrives with some delay $\tau$. With such delayed feedback, the best possible regret bound is well-known to be $O(\sqrt{\tau T})$. We prove that by being able to permute losses by a distance of at most $M$ (for $M\geq \tau$), the regret can be improved to $O(\sqrt{T}(1+\sqrt{\tau^2/M}))$, using a Mirror-Descent-based algorithm which can be applied for both Euclidean and non-Euclidean geometries. We also prove a lower bound, showing that for $M<\tau/3$, it is impossible to improve the standard $O(\sqrt{\tau T})$ regret bound by more than constant factors. Finally, we provide some experiments validating the performance of our algorithm.
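To make the delayed-feedback setting concrete, here is a minimal sketch of online gradient descent where the gradient of the loss played at round $t$ only becomes available $\tau$ rounds later. This is an illustration of the baseline setting only, not the paper's Mirror-Descent-based OLLP algorithm; the quadratic losses, the function names, and the step size are all assumptions made for the example.

```python
import numpy as np

def delayed_ogd(grad_fn, T, tau, dim, lr):
    """Online gradient descent with feedback delay tau.

    The gradient computed at round t is buffered and only applied
    once round t + tau is reached, mimicking delayed feedback.
    (Illustrative sketch; not the OLLP algorithm from the paper.)
    """
    x = np.zeros(dim)
    pending = []     # list of (arrival_round, gradient) pairs
    iterates = []
    for t in range(T):
        iterates.append(x.copy())
        # the loss for round t is revealed now, but its gradient
        # will only arrive (become usable) at round t + tau
        pending.append((t + tau, grad_fn(t, x)))
        # apply every buffered gradient whose delay has elapsed
        ready = [g for (arr, g) in pending if arr <= t]
        pending = [(arr, g) for (arr, g) in pending if arr > t]
        for g in ready:
            x = x - lr * g
    return iterates

# Toy check with fixed losses f_t(x) = 0.5 * ||x - 1||^2,
# whose gradient at x is (x - 1); the shared minimizer is (1, 1).
T, tau = 200, 5
its = delayed_ogd(lambda t, x: x - np.ones(2), T, tau, dim=2, lr=0.05)
```

Because updates use gradients that are $\tau$ rounds stale, the iterates lag behind; this staleness is exactly what drives the $O(\sqrt{\tau T})$ regret barrier that local permutations (with window $M \geq \tau$) can partially circumvent.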
Author Information
Liran Szlak (Weizmann Institute of Science)
Ohad Shamir (Weizmann Institute of Science)
Related Events (a corresponding poster, oral, or spotlight)

2017 Poster: Online Learning with Local Permutations and Delayed Feedback »
Mon Aug 7th 08:30 AM – 12:00 PM, Room: Gallery
More from the Same Authors

2020 Poster: The Complexity of Finding Stationary Points with Stochastic Gradient Descent »
Yoel Drori · Ohad Shamir 
2020 Poster: Proving the Lottery Ticket Hypothesis: Pruning is All You Need »
Eran Malach · Gilad Yehudai · Shai Shalev-Shwartz · Ohad Shamir 
2020 Poster: Is Local SGD Better than Minibatch SGD? »
Blake Woodworth · Kumar Kshitij Patel · Sebastian Stich · Zhen Dai · Brian Bullins · Brendan McMahan · Ohad Shamir · Nati Srebro 
2018 Poster: Spurious Local Minima are Common in Two-Layer ReLU Neural Networks »
Itay Safran · Ohad Shamir 
2018 Oral: Spurious Local Minima are Common in Two-Layer ReLU Neural Networks »
Itay Safran · Ohad Shamir 
2017 Poster: Oracle Complexity of Second-Order Methods for Finite-Sum Problems »
Yossi Arjevani · Ohad Shamir 
2017 Poster: Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis »
Dan Garber · Ohad Shamir · Nati Srebro 
2017 Poster: Depth-Width Tradeoffs in Approximating Natural Functions With Neural Networks »
Itay Safran · Ohad Shamir 
2017 Poster: Failures of Gradient-Based Deep Learning »
Shaked Shammah · Shai Shalev-Shwartz · Ohad Shamir 
2017 Talk: Depth-Width Tradeoffs in Approximating Natural Functions With Neural Networks »
Itay Safran · Ohad Shamir 
2017 Talk: Failures of Gradient-Based Deep Learning »
Shaked Shammah · Shai Shalev-Shwartz · Ohad Shamir 
2017 Talk: Oracle Complexity of Second-Order Methods for Finite-Sum Problems »
Yossi Arjevani · Ohad Shamir 
2017 Talk: Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis »
Dan Garber · Ohad Shamir · Nati Srebro