
A Simple Reward-free Approach to Constrained Reinforcement Learning
Sobhan Miryoosefi · Chi Jin

Thu Jul 21 10:55 AM -- 11:00 AM (PDT) @ Room 301 - 303

In constrained reinforcement learning (RL), a learning agent seeks not only to optimize the overall reward but also to satisfy additional safety, diversity, or budget constraints. Consequently, existing constrained RL solutions require several new algorithmic ingredients that are notably different from standard RL. On the other hand, reward-free RL has been independently developed in the unconstrained literature; it learns the transition dynamics without using the reward information and is thus naturally capable of addressing RL with multiple objectives under common dynamics. This paper bridges reward-free RL and constrained RL. In particular, we propose a simple meta-algorithm such that, given any reward-free RL oracle, the approachability and constrained RL problems can be directly solved with negligible overhead in sample complexity. Utilizing existing reward-free RL solvers, our framework provides sharp sample complexity results for constrained RL in the tabular MDP setting, matching the best existing results up to a factor of horizon dependence; our framework directly extends to a setting of tabular two-player Markov games, and gives a new result for constrained RL with linear function approximation.
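The two-phase structure described above (a reward-free exploration phase that learns the dynamics, followed by planning on the learned model for any scalarized objective) can be sketched in a toy tabular setting. Everything below is an illustrative assumption, not the paper's actual algorithm: the tiny 2-state MDP, the uniform-random policy standing in for a reward-free exploration oracle, and the Lagrangian grid sweep used to handle the single cost constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 2-action finite-horizon MDP (all numbers illustrative).
S, A, H = 2, 2, 5
P_true = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.5, 0.5], [0.7, 0.3]]])  # P_true[s, a] = next-state dist.
r = np.array([[0.0, 1.0], [0.5, 0.2]])         # reward r[s, a]
c = np.array([[0.0, 1.0], [0.0, 0.0]])         # cost c[s, a]
tau = 1.0                                      # constraint: E[total cost] <= tau

# Phase 1: reward-free exploration. A uniform-random policy is a crude
# stand-in for a real reward-free oracle; only transitions are recorded.
counts = np.zeros((S, A, S))
for _ in range(20000):
    s = 0
    for _ in range(H):
        a = rng.integers(A)
        s2 = rng.choice(S, p=P_true[s, a])
        counts[s, a, s2] += 1
        s = s2
P_hat = counts / counts.sum(axis=2, keepdims=True)  # estimated dynamics

# Phase 2: the learned model supports planning for ANY scalarized reward,
# which is what lets one meta-algorithm serve multiple objectives.
def plan(weights):
    """Finite-horizon value iteration under P_hat for reward `weights`."""
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = weights + P_hat @ V        # Q[s, a], shape (S, A)
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi

def evaluate(pi, signal):
    """Expected cumulative `signal` of policy pi under P_hat, from s = 0."""
    d = np.zeros(S); d[0] = 1.0
    total = 0.0
    for h in range(H):
        total += sum(d[s] * signal[s, pi[h, s]] for s in range(S))
        d = sum(d[s] * P_hat[s, pi[h, s]] for s in range(S))
    return total

# Handle the constraint by sweeping a Lagrange multiplier over the
# scalarized reward r - lam * c and keeping the best feasible policy.
best = None
for lam in np.linspace(0.0, 5.0, 51):
    pi = plan(r - lam * c)
    rew, cost = evaluate(pi, r), evaluate(pi, c)
    if cost <= tau and (best is None or rew > best[0]):
        best = (rew, cost)
```

Note that the model is estimated once, without reward information, and then reused for every multiplier value in the sweep; that reuse is the sense in which the constrained problem adds negligible sample-complexity overhead on top of the reward-free oracle.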

Author Information

Sobhan Miryoosefi (Princeton University)
Chi Jin (Princeton University)
