

Oral in Affinity Workshop: LatinX in AI (LXAI) Workshop

Omega: Optimistic EMA Gradients

Juan Ramirez · Rohan Sukumaran · Quentin Bertrand · Gauthier Gidel


Abstract:

Stochastic min-max optimization has gained interest in the machine learning community with the advances in GANs and adversarial training. Although game optimization is fairly well understood in the deterministic setting, some issues persist in the stochastic regime. Recent work has shown that stochastic gradient descent-ascent methods such as the optimistic gradient are highly sensitive to noise or can fail to converge. Although alternative strategies exist, they can be prohibitively expensive. We introduce Omega, a method with optimistic-like updates that mitigates the impact of noise by incorporating an exponential moving average (EMA) of historic gradients in its update rule. We also explore a variation of this algorithm that incorporates momentum. Although we do not provide convergence guarantees, our experiments on stochastic games show that Omega outperforms the optimistic gradient method when applied to linear players.
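To make the idea concrete, the sketch below contrasts the standard stochastic optimistic gradient update, z_{t+1} = z_t − η(2g_t − g_{t−1}), with an Omega-style variant in which the stale gradient g_{t−1} is replaced by an EMA of past gradients, as the abstract describes. The test problem (a noisy bilinear game min_x max_y xy), the step size, and the EMA coefficient are illustrative assumptions, not the authors' exact specification; see the paper for the precise Omega update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_field(z, sigma=1.0):
    # Gradient vector field of the bilinear game min_x max_y x*y:
    # F(x, y) = (dL/dx, -dL/dy) = (y, -x), plus Gaussian noise.
    x, y = z
    return np.array([y, -x]) + sigma * rng.normal(size=2)

def run(lr=0.01, beta=0.9, steps=5000, use_ema=True):
    # use_ema=False: plain optimistic gradient, prev holds the last
    # stochastic gradient g_{t-1}, which is noisy.
    # use_ema=True: Omega-style variant (illustrative assumption), prev
    # holds an EMA of past gradients, which averages out the noise.
    z = np.array([1.0, 1.0])
    prev = np.zeros(2)
    for _ in range(steps):
        g = noisy_field(z)
        z = z - lr * (2.0 * g - prev)  # optimistic-style extrapolation
        prev = beta * prev + (1.0 - beta) * g if use_ema else g
    return np.linalg.norm(z)  # distance to the equilibrium (0, 0)

dist_og = run(use_ema=False)
dist_omega = run(use_ema=True)
print(f"optimistic gradient: {dist_og:.3f}, EMA variant: {dist_omega:.3f}")
```

Because each iterate is perturbed by fresh noise, the final distance to the equilibrium is a random quantity; averaging over seeds (or tracking the whole trajectory) gives a fairer comparison than a single run.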
