Spotlight
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs
Yujia Jin · Aaron Sidford
Abstract:
We prove new upper and lower bounds for the sample complexity of finding an ε-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model. When the mixing time of the probability transition matrix of all policies is at most t_mix, we provide an algorithm that solves the problem using Õ(t_mix ε⁻³) (oblivious) samples per state-action pair. Further, we provide a lower bound showing that a linear dependence on t_mix is necessary in the worst case for any algorithm which computes oblivious samples. We obtain our results by establishing connections between infinite-horizon average-reward MDPs and discounted MDPs of possible further utility.
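The connection mentioned in the last sentence can be sketched as follows; this is a hedged paraphrase of a standard average-to-discounted reduction, not a quote from the paper, and the constants are illustrative. Writing ρ^π for the average reward of a policy π and v_γ^π for its γ-discounted value, uniform mixing at rate t_mix gives, for every state s,

\[
  \bigl|\, \rho^{\pi} - (1-\gamma)\, v^{\pi}_{\gamma}(s) \,\bigr|
  \;\le\; O\bigl( t_{\mathrm{mix}} \, (1-\gamma) \bigr).
\]

Setting 1 − γ = Θ(ε / t_mix) therefore makes an O(ε)-optimal policy for the discounted MDP also O(ε)-optimal for the average-reward MDP, at the cost of an effective horizon 1/(1 − γ) = Θ(t_mix / ε).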