
Online Learning in Unknown Markov Games
Yi Tian · Yuanhao Wang · Tiancheng Yu · Suvrit Sra

Wed Jul 21 06:20 PM -- 06:25 PM (PDT)
We study online learning in unknown Markov games, a problem that arises in episodic multi-agent reinforcement learning where the actions of the opponents are unobservable. We show that in this challenging setting, achieving sublinear regret against the best response in hindsight is statistically hard. We then consider a weaker notion of regret by competing with the \emph{minimax value} of the game, and present an algorithm that achieves a sublinear $\tilde{\mathcal{O}}(K^{2/3})$ regret after $K$ episodes. This is, to our knowledge, the first sublinear regret bound for online learning in unknown Markov games. Importantly, our regret bound is independent of the size of the opponents' action spaces. As a result, even when the opponents' actions are fully observable, our regret bound improves upon existing analysis (e.g., Xie et al., 2020) by an exponential factor in the number of opponents.
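As a sketch of the weaker benchmark described above (the notation here is assumed for illustration, not taken from the abstract): writing $V^{*}$ for the minimax value of the game and $V^{\pi_k}$ for the value attained by the learner's policy $\pi_k$ in episode $k$, the regret competed against is

```latex
% Regret against the minimax value over K episodes (notation assumed):
% the benchmark V^* is the game's minimax value, not the best response
% in hindsight, which the paper shows is statistically hard to compete with.
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \bigl( V^{*} - V^{\pi_k} \bigr)
\;=\; \tilde{\mathcal{O}}\!\bigl(K^{2/3}\bigr).
```

Competing with $V^{*}$ rather than the best response in hindsight is what makes a sublinear bound achievable despite the opponents' actions being unobservable.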

Author Information

Yi Tian (Massachusetts Institute of Technology)
Yuanhao Wang (Princeton University)
Tiancheng Yu (MIT)

I am a second-year PhD student in MIT EECS, advised by Prof. Suvrit Sra. Before MIT, I was an undergraduate student at Tsinghua University, majoring in Electronic Engineering and minoring in Statistics.

Suvrit Sra (MIT)
