We consider the problem of reinforcement learning when the learner is provided with (1) a baseline control policy and (2) a set of constraints that it must satisfy. The baseline policy can arise from demonstration data or a teacher agent and may provide useful cues for learning, but it may be sub-optimal for the task at hand and is not guaranteed to satisfy the specified constraints, which can encode safety, fairness, or other application-specific requirements. To learn safely from such baseline policies, we propose an iterative policy optimization algorithm that alternates between maximizing expected return on the task, minimizing distance to the baseline policy, and projecting the policy onto the constraint-satisfying set. We analyze our algorithm theoretically and provide a finite-time convergence guarantee. In experiments on five different control tasks, our algorithm consistently outperforms several state-of-the-art baselines, achieving on average 10 times fewer constraint violations and 40% higher reward.
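To make the alternation concrete, here is a minimal Python sketch of the three steps the abstract describes, on a toy problem; it is not the paper's actual algorithm. All names and modeling choices are illustrative assumptions: the "policy" is a parameter vector theta, "expected return" is a concave quadratic, "distance to the baseline" is squared Euclidean distance, and the "constraint-satisfying set" is a Euclidean ball (so projection is radial clipping).

```python
import numpy as np

# Hypothetical sketch of the alternating scheme from the abstract:
# (1) a reward-improvement step, (2) a step pulling the policy toward
# the baseline, (3) a projection onto the constraint set.
rng = np.random.default_rng(0)
theta_star = rng.normal(size=4)   # unconstrained return maximizer (toy)
theta_base = rng.normal(size=4)   # possibly infeasible baseline policy
radius = 1.0                      # constraint set: ||theta|| <= radius

def return_grad(theta):
    # gradient of R(theta) = -0.5 * ||theta - theta_star||^2
    return theta_star - theta

def baseline_grad(theta):
    # gradient of D(theta) = 0.5 * ||theta - theta_base||^2
    return theta - theta_base

def project(theta):
    # Euclidean projection onto the ball ||theta|| <= radius
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

theta = project(theta_base)       # initialize from the projected baseline
lr = 0.1
for _ in range(200):
    theta = theta + lr * return_grad(theta)    # (1) maximize expected return
    theta = theta - lr * baseline_grad(theta)  # (2) minimize distance to baseline
    theta = project(theta)                     # (3) project onto constraint set

print("final policy parameters:", theta)
print("constraint satisfied:", np.linalg.norm(theta) <= radius + 1e-9)
```

In this toy setting the iterate converges to a feasible point that trades off the return maximizer against the baseline; the paper's contribution is doing this with actual policies and stochastic constraint estimates, with a finite-time guarantee.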
Author Information
Jimmy (Tsung-Yen) Yang (Princeton University)
My research interests lie at the intersection of machine learning, reinforcement learning, and natural language processing. Specifically, my Ph.D. work focuses on building autonomous systems that acquire knowledge by interacting with the world, and on providing provable safety and performance guarantees for such systems during learning and deployment.
Justinian Rosca (Siemens Corp.)
Karthik Narasimhan (Princeton)
Peter Ramadge (Princeton)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Accelerating Safe Reinforcement Learning with Constraint-mismatched Baseline Policies
  Wed. Jul 21st 04:00 -- 06:00 PM
More from the Same Authors
- 2022 Poster: Training Discrete Deep Generative Models via Gapped Straight-Through Estimator
  Ting-Han Fan · Ta-Chung Chi · Alexander Rudnicky · Peter Ramadge
- 2022 Spotlight: Training Discrete Deep Generative Models via Gapped Straight-Through Estimator
  Ting-Han Fan · Ta-Chung Chi · Alexander Rudnicky · Peter Ramadge
- 2021 Poster: Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning
  Austin W. Hanjie · Victor Zhong · Karthik Narasimhan
- 2021 Spotlight: Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning
  Austin W. Hanjie · Victor Zhong · Karthik Narasimhan
- 2020: Invited Talk: Karthik Narasimhan
  Karthik Narasimhan
- 2020 Poster: Calibration, Entropy Rates, and Memory in Language Models
  Mark Braverman · Xinyi Chen · Sham Kakade · Karthik Narasimhan · Cyril Zhang · Yi Zhang
- 2019 Poster: Task-Agnostic Dynamics Priors for Deep Reinforcement Learning
  Yilun Du · Karthik Narasimhan
- 2019 Oral: Task-Agnostic Dynamics Priors for Deep Reinforcement Learning
  Yilun Du · Karthik Narasimhan