
What can online reinforcement learning with function approximation benefit from general coverage conditions?
Fanghui Liu · Luca Viano · Volkan Cevher

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #737
In online reinforcement learning (RL), instead of employing standard structural assumptions on Markov decision processes (MDPs), a certain coverage condition (originating from offline RL) suffices to ensure sample-efficient guarantees (Xie et al. 2023). In this work, we pursue this new direction by investigating more general coverage conditions and studying their potential and utility for efficient online RL. We identify additional concepts, including the $L^p$ variant of concentrability, density ratio realizability, and a trade-off between partial and rest coverage, that can also benefit sample-efficient online RL, achieving improved regret bounds. Furthermore, if exploratory offline data are used, then under our coverage conditions both statistically and computationally efficient guarantees can be achieved for online RL. Moreover, even when the MDP structure is given, e.g., a linear MDP, we show that good coverage conditions are still beneficial for obtaining regret bounds faster than $\widetilde{\mathcal{O}}(\sqrt{T})$, and even regret of logarithmic order. These results justify the use of general coverage conditions in efficient online RL.
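As a rough illustration of the coverage concepts the abstract names (not code from the paper): classical concentrability compares a policy's occupancy measure $d^\pi$ with a data distribution $\mu$ via the sup of the density ratio, $\sup_{s,a} d^\pi(s,a)/\mu(s,a)$, while an $L^p$ variant takes an $L^p$ norm of that ratio under $\mu$, which can be much smaller. A minimal sketch on a finite state-action space, with hypothetical toy distributions:

```python
import numpy as np

def concentrability(d_pi, mu, p=np.inf):
    """L^p concentrability of occupancy d_pi w.r.t. data distribution mu.

    p = inf recovers the classical sup-norm coefficient; finite p gives
    the L^p(mu) norm of the density ratio, E_mu[(d_pi/mu)^p]^(1/p).
    """
    ratio = d_pi / mu  # density ratio on a finite state-action space
    if np.isinf(p):
        return float(ratio.max())
    return float((mu * ratio**p).sum() ** (1.0 / p))

# Toy distributions over 4 state-action pairs (illustrative only).
d_pi = np.array([0.7, 0.1, 0.1, 0.1])
mu = np.array([0.25, 0.25, 0.25, 0.25])

print(concentrability(d_pi, mu))       # L^inf coefficient: 2.8
print(concentrability(d_pi, mu, p=2))  # L^2 variant, smaller than the sup
```

The $L^p$ value is never larger than the $L^\infty$ value, which is one intuition for why weaker (more general) coverage conditions can still yield sample-efficiency guarantees.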

Author Information

Fanghui Liu (EPFL)

I am currently a postdoctoral researcher at EPFL, and my research interests include statistical machine learning, mainly kernel methods and learning theory.

Luca Viano (EPFL)
Volkan Cevher (EPFL)
