

Poster in Workshop: Workshop on Reinforcement Learning Theory

Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games

Yu Bai · Chi Jin · Huan Wang · Caiming Xiong


Abstract:

Real-world applications such as economics and policy making often involve solving multi-agent games with two unique features: (1) the agents are inherently \emph{asymmetric} and partitioned into leaders and followers; (2) the agents have different reward functions, so the game is \emph{general-sum}. The majority of existing results in this field focus on either symmetric solution concepts (e.g., Nash equilibrium) or zero-sum games. It remains largely open how to learn the \emph{Stackelberg equilibrium}---an asymmetric analog of the Nash equilibrium---in general-sum games efficiently from samples.
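To fix notation for the discussion below (the symbols here are ours and not taken verbatim from the paper): in a two-player general-sum game, the leader commits to an action $a \in \mathcal{A}$, the follower observes it and best-responds with $b^\star(a) \in \arg\max_{b \in \mathcal{B}} r_2(a, b)$, and the leader's Stackelberg value is
$$ V^\star \;=\; \max_{a \in \mathcal{A}} \; r_1\big(a, b^\star(a)\big), $$
where $r_1$ and $r_2$ denote the leader's and follower's reward functions. Unlike a Nash equilibrium, which treats the players symmetrically, the Stackelberg solution builds the leader-follower asymmetry into the solution concept itself.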

This paper initiates the theoretical study of sample-efficient learning of the Stackelberg equilibrium in the bandit feedback setting, where we only observe noisy samples of the rewards. We consider three representative two-player general-sum games: bandit games, bandit-reinforcement learning (bandit-RL) games, and linear bandit games. In all of these games, we identify a fundamental gap between the exact value of the Stackelberg equilibrium and its estimate from finitely many noisy samples, a gap that cannot be closed information-theoretically regardless of the algorithm. We then establish sharp positive results on sample-efficient learning of Stackelberg equilibria whose value is optimal up to the gap identified above, with matching lower bounds in the dependence on the gap, the error tolerance, and the size of the action spaces. Overall, our results unveil unique challenges in learning Stackelberg equilibria under noisy bandit feedback, which we hope will shed light on future research on this topic.
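A minimal instance illustrating where such a gap can come from (the numbers here are illustrative assumptions, not an example taken from the paper): suppose the leader has a single action $a$ and the follower chooses between $b_1$ and $b_2$, with
$$ r_2(a, b_1) = \tfrac{1}{2}, \qquad r_2(a, b_2) = \tfrac{1}{2} + \epsilon, \qquad r_1(a, b_1) = 1, \qquad r_1(a, b_2) = 0, $$
for an unknown $\epsilon$ of tiny magnitude and unknown sign. Deciding from noisy samples whether the follower's best response is $b_1$ or $b_2$ requires on the order of $1/\epsilon^2$ observations, so under any fixed sample budget the value the leader can guarantee may differ from the exact Stackelberg value by a constant; this is the flavor of the information-theoretic gap described above.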
