Poster
Online Learning with Local Permutations and Delayed Feedback
Liran Szlak · Ohad Shamir
We propose an Online Learning with Local Permutations (OLLP) setting, in which the learner is allowed to slightly permute the \emph{order} of the loss functions generated by an adversary. On the one hand, this models natural situations where the exact order of the learner's responses is not crucial; on the other hand, it might allow better learning and regret performance by mitigating highly adversarial loss sequences. Moreover, with random permutations, this can be seen as a setting interpolating between adversarial and stochastic losses. In this paper, we consider the applicability of this setting to convex online learning with delayed feedback, in which the feedback on the prediction made in round $t$ arrives with some delay $\tau$. With such delayed feedback, the best possible regret bound is well known to be $O(\sqrt{\tau T})$. We prove that by being able to permute losses by a distance of at most $M$ (for $M\geq \tau$), the regret can be improved to $O(\sqrt{T}(1+\sqrt{\tau^2/M}))$, using a Mirror-Descent-based algorithm which can be applied in both Euclidean and non-Euclidean geometries. We also prove a lower bound, showing that for $M<\tau/3$ it is impossible to improve on the standard $O(\sqrt{\tau T})$ regret bound by more than constant factors. Finally, we provide experiments validating the performance of our algorithm.
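As a rough illustration of the setting described above, the sketch below simulates online gradient descent (Mirror Descent in the Euclidean geometry) when the gradient of the loss played at round $t$ only arrives at round $t+\tau$, and compares playing the losses in arrival order against a simple $M$-local block shuffle. The quadratic losses, the step size, and the block-shuffle permutation are illustrative assumptions for this sketch, not the paper's actual OLLP algorithm or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

T, tau, d = 1000, 10, 5
eta = 1.0 / np.sqrt(tau * T)  # a common step-size choice in the tau-delayed setting

# Hypothetical loss stream: f_t(w) = 0.5 * ||w - z_t||^2, so grad f_t(w) = w - z_t.
Z = rng.normal(size=(T, d))

def run(order):
    """Play the losses in the given order under tau-delayed feedback
    (plain delayed online gradient descent) and return the cumulative loss."""
    w = np.zeros(d)
    iterates = np.zeros((T, d))  # iterate played at each round
    total = 0.0
    for t in range(T):
        iterates[t] = w
        total += 0.5 * np.sum((w - Z[order[t]]) ** 2)
        s = t - tau  # feedback for round s only arrives now
        if s >= 0:
            grad = iterates[s] - Z[order[s]]  # grad of loss s at the point played then
            w = w - eta * grad
    return total

# Baseline: losses processed in adversarial arrival order.
base = run(np.arange(T))

# Local permutation with window M >= tau: shuffle each length-M block.
# This block shuffle is only a stand-in for the paper's local reordering.
M = 3 * tau
perm = np.concatenate([rng.permutation(np.arange(i, min(i + M, T)))
                       for i in range(0, T, M)])
shuffled = run(perm)

print(f"cumulative loss, arrival order:   {base:.1f}")
print(f"cumulative loss, M-local shuffle: {shuffled:.1f}")
```

On i.i.d. Gaussian losses the two orderings behave similarly; the gains the paper establishes arise against adversarially ordered loss sequences, which the local permutation helps smooth out.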
Author Information
Liran Szlak (Weizmann Institute of Science)
Ohad Shamir (Weizmann Institute of Science)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Online Learning with Local Permutations and Delayed Feedback »
  Mon Aug 7th 12:48 -- 01:06 AM, Room C4.1
More from the Same Authors
- 2020 Poster: The Complexity of Finding Stationary Points with Stochastic Gradient Descent »
  Yoel Drori · Ohad Shamir
- 2020 Poster: Proving the Lottery Ticket Hypothesis: Pruning is All You Need »
  Eran Malach · Gilad Yehudai · Shai Shalev-Shwartz · Ohad Shamir
- 2020 Poster: Is Local SGD Better than Minibatch SGD? »
  Blake Woodworth · Kumar Kshitij Patel · Sebastian Stich · Zhen Dai · Brian Bullins · Brendan McMahan · Ohad Shamir · Nati Srebro
- 2018 Poster: Spurious Local Minima are Common in Two-Layer ReLU Neural Networks »
  Itay Safran · Ohad Shamir
- 2018 Oral: Spurious Local Minima are Common in Two-Layer ReLU Neural Networks »
  Itay Safran · Ohad Shamir
- 2017 Poster: Oracle Complexity of Second-Order Methods for Finite-Sum Problems »
  Yossi Arjevani · Ohad Shamir
- 2017 Poster: Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis »
  Dan Garber · Ohad Shamir · Nati Srebro
- 2017 Poster: Depth-Width Tradeoffs in Approximating Natural Functions With Neural Networks »
  Itay Safran · Ohad Shamir
- 2017 Poster: Failures of Gradient-Based Deep Learning »
  Shaked Shammah · Shai Shalev-Shwartz · Ohad Shamir
- 2017 Talk: Depth-Width Tradeoffs in Approximating Natural Functions With Neural Networks »
  Itay Safran · Ohad Shamir
- 2017 Talk: Failures of Gradient-Based Deep Learning »
  Shaked Shammah · Shai Shalev-Shwartz · Ohad Shamir
- 2017 Talk: Oracle Complexity of Second-Order Methods for Finite-Sum Problems »
  Yossi Arjevani · Ohad Shamir
- 2017 Talk: Communication-efficient Algorithms for Distributed Stochastic Principal Component Analysis »
  Dan Garber · Ohad Shamir · Nati Srebro