Real-world applications such as economics and policy making often involve solving multi-agent games with two distinctive features: (1) the agents are inherently \emph{asymmetric} and partitioned into leaders and followers; (2) the agents have different reward functions, so the game is \emph{general-sum}. The majority of existing results in this field focus on either symmetric solution concepts (e.g., Nash equilibrium) or zero-sum games. It remains largely open how to learn the \emph{Stackelberg equilibrium}---an asymmetric analog of the Nash equilibrium---in general-sum games efficiently from samples.
This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in the bandit feedback setting, where we only observe noisy samples of the reward. We consider three representative two-player general-sum games: bandit games, bandit-reinforcement learning (bandit-RL) games, and linear bandit games. In all these games, we identify a fundamental gap between the exact value of the Stackelberg equilibrium and its estimated version using finitely many noisy samples, which cannot be closed information-theoretically regardless of the algorithm. We then establish sharp positive results on sample-efficient learning of Stackelberg equilibria whose value is optimal up to the gap identified above, with matching lower bounds in the dependency on the gap, the error tolerance, and the size of the action spaces. Overall, our results unveil unique challenges in learning Stackelberg equilibria under noisy bandit feedback, which we hope will shed light on future research on this topic.
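To make the solution concept concrete, the following is a minimal sketch (with hypothetical payoff matrices, not taken from the paper) of the Stackelberg equilibrium in a two-player general-sum matrix game: the leader commits to an action, the follower best-responds under its own rewards, and the leader picks the commitment maximizing its resulting reward. The matrices are chosen so the follower is nearly indifferent at one leader action, which illustrates why the exact value can be unidentifiable from finitely many noisy samples:

```python
# Hypothetical 2x2 general-sum game.
# u_L[a][b]: leader's reward when leader plays a and follower plays b.
# u_F[a][b]: follower's reward in the same situation.
u_L = [[1.0, 3.0],
       [2.0, 0.0]]
u_F = [[0.500, 0.501],   # follower nearly indifferent after leader plays 0
       [1.000, 0.000]]

def stackelberg(u_L, u_F):
    """Leader commits to a pure action a; the follower best-responds to u_F[a].
    Returns the leader's optimal commitment and its exact Stackelberg value."""
    best_a, best_v = 0, float("-inf")
    for a in range(len(u_L)):
        # Follower's best response to the committed action a.
        b = max(range(len(u_F[a])), key=lambda j: u_F[a][j])
        if u_L[a][b] > best_v:
            best_a, best_v = a, u_L[a][b]
    return best_a, best_v

a_star, v_star = stackelberg(u_L, u_F)
# Here a_star = 0 with value 3.0, because the follower (barely) prefers b = 1.
# Any reward noise larger than the 0.001 margin makes the follower's response
# after a = 0 (and hence the leader's value, 3.0 vs. 1.0) statistically
# indistinguishable from finitely many samples -- the kind of gap the
# abstract refers to.
```

This is only an illustration of the solution concept; the paper's algorithms and lower bounds concern learning such equilibria from noisy bandit feedback rather than from known payoff matrices.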
Author Information
Yu Bai (Salesforce Research)
Chi Jin (Princeton University)
Huan Wang (Salesforce Research)
Caiming Xiong (Salesforce)
More from the Same Authors
- 2021: Near-Optimal Offline Reinforcement Learning via Double Variance Reduction
  Ming Yin · Yu Bai · Yu-Xiang Wang
- 2021: Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning
  Tengyang Xie · Nan Jiang · Huan Wang · Caiming Xiong · Yu Bai
- 2021: The Power of Exploiter: Provable Multi-Agent RL in Large State Spaces
  Chi Jin · Qinghua Liu · Tiancheng Yu
- 2021: Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms
  Chi Jin · Qinghua Liu · Sobhan Miryoosefi
- 2023: Is RLHF More Difficult than Standard RL?
  Chi Jin
- 2023 Poster: Efficient displacement convex optimization with particle gradient descent
  Hadi Daneshmand · Jason Lee · Chi Jin
- 2022 Poster: A Simple Reward-free Approach to Constrained Reinforcement Learning
  Sobhan Miryoosefi · Chi Jin
- 2022 Spotlight: A Simple Reward-free Approach to Constrained Reinforcement Learning
  Sobhan Miryoosefi · Chi Jin
- 2022 Poster: BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
  Junnan Li · Dongxu Li · Caiming Xiong · Steven Hoi
- 2022 Poster: The Power of Exploiter: Provable Multi-Agent RL in Large State Spaces
  Chi Jin · Qinghua Liu · Tiancheng Yu
- 2022 Poster: Learning Markov Games with Adversarial Opponents: Efficient Algorithms and Fundamental Limits
  Qinghua Liu · Yuanhao Wang · Chi Jin
- 2022 Poster: Near-Optimal Learning of Extensive-Form Games with Imperfect Information
  Yu Bai · Chi Jin · Song Mei · Tiancheng Yu
- 2022 Spotlight: BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
  Junnan Li · Dongxu Li · Caiming Xiong · Steven Hoi
- 2022 Spotlight: Near-Optimal Learning of Extensive-Form Games with Imperfect Information
  Yu Bai · Chi Jin · Song Mei · Tiancheng Yu
- 2022 Oral: Learning Markov Games with Adversarial Opponents: Efficient Algorithms and Fundamental Limits
  Qinghua Liu · Yuanhao Wang · Chi Jin
- 2022 Spotlight: The Power of Exploiter: Provable Multi-Agent RL in Large State Spaces
  Chi Jin · Qinghua Liu · Tiancheng Yu
- 2022 Poster: Provable Reinforcement Learning with a Short-Term Memory
  Yonathan Efroni · Chi Jin · Akshay Krishnamurthy · Sobhan Miryoosefi
- 2022 Spotlight: Provable Reinforcement Learning with a Short-Term Memory
  Yonathan Efroni · Chi Jin · Akshay Krishnamurthy · Sobhan Miryoosefi
- 2021: Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games
  Yu Bai · Chi Jin · Huan Wang · Caiming Xiong
- 2021 Poster: Near-Optimal Representation Learning for Linear Bandits and Linear RL
  Jiachen Hu · Xiaoyu Chen · Chi Jin · Lihong Li · Liwei Wang
- 2021 Poster: Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization
  Stanislaw Jastrzebski · Devansh Arpit · Oliver Astrand · Giancarlo Kerg · Huan Wang · Caiming Xiong · Richard Socher · Kyunghyun Cho · Krzysztof J Geras
- 2021 Poster: A Sharp Analysis of Model-based Reinforcement Learning with Self-Play
  Qinghua Liu · Tiancheng Yu · Yu Bai · Chi Jin
- 2021 Poster: Provable Meta-Learning of Linear Representations
  Nilesh Tripuraneni · Chi Jin · Michael Jordan
- 2021 Poster: How Important is the Train-Validation Split in Meta-Learning?
  Yu Bai · Minshuo Chen · Pan Zhou · Tuo Zhao · Jason Lee · Sham Kakade · Huan Wang · Caiming Xiong
- 2021 Spotlight: Provable Meta-Learning of Linear Representations
  Nilesh Tripuraneni · Chi Jin · Michael Jordan
- 2021 Spotlight: A Sharp Analysis of Model-based Reinforcement Learning with Self-Play
  Qinghua Liu · Tiancheng Yu · Yu Bai · Chi Jin
- 2021 Spotlight: Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization
  Stanislaw Jastrzebski · Devansh Arpit · Oliver Astrand · Giancarlo Kerg · Huan Wang · Caiming Xiong · Richard Socher · Kyunghyun Cho · Krzysztof J Geras
- 2021 Spotlight: How Important is the Train-Validation Split in Meta-Learning?
  Yu Bai · Minshuo Chen · Pan Zhou · Tuo Zhao · Jason Lee · Sham Kakade · Huan Wang · Caiming Xiong
- 2021 Spotlight: Near-Optimal Representation Learning for Linear Bandits and Linear RL
  Jiachen Hu · Xiaoyu Chen · Chi Jin · Lihong Li · Liwei Wang
- 2021 Poster: Don’t Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification
  Yu Bai · Song Mei · Huan Wang · Caiming Xiong
- 2021 Poster: Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models
  Zitong Yang · Yu Bai · Song Mei
- 2021 Poster: Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning
  Yaqi Duan · Chi Jin · Zhiyuan Li
- 2021 Spotlight: Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning
  Yaqi Duan · Chi Jin · Zhiyuan Li
- 2021 Spotlight: Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models
  Zitong Yang · Yu Bai · Song Mei
- 2021 Spotlight: Don’t Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification
  Yu Bai · Song Mei · Huan Wang · Caiming Xiong
- 2020 Poster: On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems
  Tianyi Lin · Chi Jin · Michael Jordan
- 2020 Poster: Reward-Free Exploration for Reinforcement Learning
  Chi Jin · Akshay Krishnamurthy · Max Simchowitz · Tiancheng Yu
- 2020 Poster: Provable Self-Play Algorithms for Competitive Reinforcement Learning
  Yu Bai · Chi Jin
- 2020 Poster: Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills
  Victor Campos · Alexander Trott · Caiming Xiong · Richard Socher · Xavier Giro-i-Nieto · Jordi Torres
- 2020 Poster: Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition
  Chi Jin · Tiancheng Jin · Haipeng Luo · Suvrit Sra · Tiancheng Yu
- 2020 Poster: Provably Efficient Exploration in Policy Optimization
  Qi Cai · Zhuoran Yang · Chi Jin · Zhaoran Wang
- 2020 Poster: What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?
  Chi Jin · Praneeth Netrapalli · Michael Jordan
- 2019 Poster: Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting
  Xilai Li · Yingbo Zhou · Tianfu Wu · Richard Socher · Caiming Xiong
- 2019 Poster: Taming MAML: Efficient unbiased meta-reinforcement learning
  Hao Liu · Richard Socher · Caiming Xiong
- 2019 Poster: On the Generalization Gap in Reparameterizable Reinforcement Learning
  Huan Wang · Stephan Zheng · Caiming Xiong · Richard Socher
- 2019 Oral: Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting
  Xilai Li · Yingbo Zhou · Tianfu Wu · Richard Socher · Caiming Xiong
- 2019 Oral: On the Generalization Gap in Reparameterizable Reinforcement Learning
  Huan Wang · Stephan Zheng · Caiming Xiong · Richard Socher
- 2019 Oral: Taming MAML: Efficient unbiased meta-reinforcement learning
  Hao Liu · Richard Socher · Caiming Xiong