We study the problem of repeated play in a zero-sum game in which the payoff matrix may change, in a possibly adversarial fashion, on each round; we call these Online Matrix Games. Finding the Nash Equilibrium (NE) of a two-player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix it reduces to solving a linear program. But when the payoff matrix evolves over time, our goal is to find a sequential algorithm that can compete, in a certain sense, with the NE of the long-term-averaged payoff matrix. We design an algorithm with small NE regret; that is, we ensure that the long-term payoff of both players is close to the minimax optimum in hindsight. Our algorithm achieves near-optimal dependence on the number of rounds and only poly-logarithmic dependence on the number of actions available to the players. Additionally, we show that the naive reduction, in which each player simply minimizes its own regret, fails to achieve the stated objective regardless of which algorithm is used. Lastly, we consider the so-called bandit setting, where feedback is significantly limited, and we provide an algorithm with small NE regret using one-point estimates of each payoff matrix.
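As a point of reference for the fixed-matrix claim above, the sketch below solves max_x min_y x^T A y via the standard linear-programming reduction, using scipy.optimize.linprog. The helper name zero_sum_nash is ours, and the example illustrates only the static baseline the abstract mentions, not the paper's online algorithm.

    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_nash(A):
        """Row player's NE strategy and game value for an n x m zero-sum
        game with payoff x^T A y, via the standard LP reduction."""
        n, m = A.shape
        # Variables z = (x_1, ..., x_n, v); maximize v <=> minimize -v.
        c = np.zeros(n + 1)
        c[-1] = -1.0
        # Guarantee constraints: v - (A^T x)_j <= 0 for every column j.
        A_ub = np.hstack([-A.T, np.ones((m, 1))])
        b_ub = np.zeros(m)
        # Simplex constraint: sum_i x_i = 1.
        A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
        b_eq = np.array([1.0])
        bounds = [(0, None)] * n + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                      b_eq=b_eq, bounds=bounds)
        return res.x[:n], res.x[-1]

    # Matching pennies: value 0, uniform strategy is optimal.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    x, v = zero_sum_nash(A)
    print(x, v)  # ~[0.5, 0.5], ~0.0

In the online setting the paper targets, A changes each round and no single LP solve applies; the NE-regret objective compares the players' average payoff against the minimax value of the time-averaged matrix.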
Author Information
Adrian Rivera Cardoso (Georgia Institute of Technology)
Jacob Abernethy (Georgia Institute of Technology)
He Wang (Georgia Institute of Technology)
Huan Xu (Georgia Institute of Technology)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Competing Against Nash Equilibria in Adversarially Changing Zero-Sum Games
  Thu. Jun 13th 04:25 -- 04:30 PM, Room 102
More from the Same Authors
- 2020 Contributed Talk: Bridging Truthfulness and Corruption-Robustness in Multi-Armed Bandit Mechanisms
  Jacob Abernethy · Bhuvesh Kumar · Thodoris Lykouris · Yinglun Xu
- 2021: How does Over-Parametrization Lead to Acceleration for Learning a Single Teacher Neuron with Quadratic Activation?
  Jun-Kun Wang · Jacob Abernethy
- 2022: ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2023: Randomized Quantization is All You Need for Differential Privacy in Federated Learning
  Yeojoon Youn · Zihao Hu · Juba Ziani · Jacob Abernethy
- 2022 Poster: ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2022 Poster: Active Sampling for Min-Max Fairness
  Jacob Abernethy · Pranjal Awasthi · Matthäus Kleindessner · Jamie Morgenstern · Chris Russell · Jie Zhang
- 2022 Spotlight: ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2022 Spotlight: Active Sampling for Min-Max Fairness
  Jacob Abernethy · Pranjal Awasthi · Matthäus Kleindessner · Jamie Morgenstern · Chris Russell · Jie Zhang
- 2021 Poster: A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network
  Jun-Kun Wang · Chi-Heng Lin · Jacob Abernethy
- 2021 Spotlight: A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network
  Jun-Kun Wang · Chi-Heng Lin · Jacob Abernethy
- 2019 Poster: Nonlinear Distributional Gradient Temporal-Difference Learning
  Chao Qu · Shie Mannor · Huan Xu
- 2019 Oral: Nonlinear Distributional Gradient Temporal-Difference Learning
  Chao Qu · Shie Mannor · Huan Xu
- 2018 Poster: Non-convex Conditional Gradient Sliding
  Chao Qu · Yan Li · Huan Xu
- 2018 Oral: Non-convex Conditional Gradient Sliding
  Chao Qu · Yan Li · Huan Xu
- 2017 Poster: Fake News Mitigation via Point Process Based Intervention
  Mehrdad Farajtabar · Jiachen Yang · Xiaojing Ye · Huan Xu · Rakshit Trivedi · Elias Khalil · Shuang Li · Le Song · Hongyuan Zha
- 2017 Talk: Fake News Mitigation via Point Process Based Intervention
  Mehrdad Farajtabar · Jiachen Yang · Xiaojing Ye · Huan Xu · Rakshit Trivedi · Elias Khalil · Shuang Li · Le Song · Hongyuan Zha