
Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation
Uri Sherman · Tomer Koren · Yishay Mansour

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #624
We study reinforcement learning with linear function approximation and adversarially changing cost functions, a setup that has mostly been considered under simplifying assumptions such as full-information feedback or exploratory conditions. We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback, featuring a combination of mirror descent and least squares policy evaluation in an auxiliary MDP used to compute exploration bonuses. Our algorithm obtains an $\widetilde O(K^{6/7})$ regret bound, improving significantly over the previous state of the art of $\widetilde O (K^{14/15})$ in this setting. In addition, we present a version of the same algorithm under the assumption that a simulator of the environment is available to the learner (but otherwise no exploratory assumptions are made), and prove it obtains state-of-the-art regret of $\widetilde O (K^{2/3})$.
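To make the two ingredients named in the abstract concrete, the sketch below illustrates (in a simplified, hypothetical form, not the authors' actual algorithm) a single policy-optimization epoch combining ridge least-squares Q-value estimation with linear features, an elliptical exploration bonus, and a mirror-descent (multiplicative-weights) policy update. The feature map `phi`, the step size `eta`, the bonus scale `beta`, and the auxiliary-MDP construction are all placeholders or omissions; the paper should be consulted for the precise procedure and parameter choices.

```python
# Hypothetical sketch of mirror descent + least-squares evaluation with
# exploration bonuses; all sizes, features, and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, d = 5, 3, 4                    # toy sizes; d = feature dimension
phi = rng.normal(size=(n_states, n_actions, d))     # assumed linear feature map

def least_squares_q(data, lam=1.0):
    """Ridge regression of observed costs onto features: w = (Phi^T Phi + lam I)^{-1} Phi^T c."""
    A = lam * np.eye(d)
    b = np.zeros(d)
    for s, a, cost in data:
        f = phi[s, a]
        A += np.outer(f, f)
        b += f * cost
    return np.linalg.solve(A, b), A

def exploration_bonus(A, beta=0.1):
    """Elliptical bonus beta * sqrt(phi^T A^{-1} phi): larger in poorly explored directions."""
    A_inv = np.linalg.inv(A)
    return beta * np.sqrt(np.einsum('sad,de,sae->sa', phi, A_inv, phi))

def mirror_descent_update(policy, q_hat, bonus, eta=0.5):
    """Multiplicative-weights step on bonus-reduced cost estimates, per state."""
    logits = np.log(policy) - eta * (q_hat - bonus)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum(axis=1, keepdims=True)

# One illustrative epoch: uniform initial policy, a batch of fake bandit-feedback samples.
policy = np.full((n_states, n_actions), 1.0 / n_actions)
data = [(rng.integers(n_states), rng.integers(n_actions), rng.random()) for _ in range(200)]
w, A = least_squares_q(data)
q_hat = phi @ w                                      # estimated costs, shape (n_states, n_actions)
policy = mirror_descent_update(policy, q_hat, exploration_bonus(A))
print(policy.round(3))
```

The bonus is subtracted from the estimated costs, so the update is optimistic: actions whose feature directions have been observed rarely receive a lower effective cost and hence more probability mass in the next policy.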

Author Information

Uri Sherman (Tel Aviv University)
Tomer Koren (Tel Aviv University and Google)
Yishay Mansour (Google and Tel Aviv University)
