Accelerated and Stable Convergence with Anchored Generalized Optimistic Method
Motahareh Sohrabi ⋅ Jianxin You ⋅ Simon Lacoste-Julien ⋅ Eduard Gorbunov ⋅ Gauthier Gidel
Abstract
We study first-order methods for solving monotone variational inequalities arising in min-max optimization. Classical approaches such as the extragradient method rely on two gradient queries per iteration, which limits their analysis and applicability in online and stochastic settings. We propose a family of Generalized Optimistic Methods with Anchoring (GOMA), which combine two-time-scale optimistic updates with an anchoring term inspired by Halpern iteration. In particular, we show that for monotone Lipschitz operators, GOMA achieves an accelerated last-iterate convergence rate of $\mathcal{O}(1/k^2)$ in the squared gradient norm, which is optimal. We also show that GOMA converges in stochastic games where classical methods, such as the extragradient and optimistic gradient methods, fail. Specifically, we establish a last-iterate convergence rate of $\mathcal{O}(1/\sqrt{k})$ for monotone Lipschitz operators in the stochastic regime with linearly increasing minibatch sizes.
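To illustrate the anchoring idea, here is a minimal sketch of an optimistic (past-gradient) update combined with a Halpern-style pull toward the starting point, run on a simple bilinear game. This is not the exact GOMA update from the paper: the function `anchored_optimistic`, the step size `alpha`, and the anchoring weight $\beta_k = 1/(k+2)$ are illustrative assumptions.

```python
import numpy as np

def F(z):
    # Monotone 1-Lipschitz operator of the bilinear game min_x max_y x*y:
    # F(x, y) = (y, -x).
    x, y = z
    return np.array([y, -x])

def anchored_optimistic(z0, num_iters=2000, alpha=0.1):
    # Illustrative anchored optimistic iteration (not the exact GOMA
    # update): an optimistic step using 2*F(z_k) - F(z_{k-1}) combined
    # with a Halpern-style pull toward the anchor z0, with weight
    # beta_k = 1/(k+2) decaying to zero.
    z, F_prev = z0.astype(float), F(z0)
    for k in range(num_iters):
        beta = 1.0 / (k + 2)
        Fz = F(z)
        g = 2.0 * Fz - F_prev                          # optimistic gradient estimate
        F_prev = Fz
        z = beta * z0 + (1.0 - beta) * z - alpha * g   # anchored update
    return z

z = anchored_optimistic(np.array([1.0, 1.0]))
print(z, np.linalg.norm(F(z)))  # iterates approach the saddle point (0, 0)
```

On this bilinear game, plain gradient descent-ascent diverges, while the optimistic correction yields convergence; the anchoring term, whose weight vanishes over time, is the mechanism credited in the abstract with stabilizing the iterates and enabling the accelerated rate.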