Principal-Driven Reward Design and Agent Policy Alignment via Bilevel-RL
Souradip Chakraborty · Amrit Bedi · Alec Koppel · Furong Huang · Mengdi Wang
Event URL: https://openreview.net/forum?id=Crs7NjBNeF

In reinforcement learning (RL), a reward function is often assumed at the outset of a policy optimization procedure. Learning under such a fixed-reward paradigm can neglect important policy optimization considerations, such as state-space coverage and safety. Moreover, it can fail to encompass broader impacts in terms of social welfare, sustainability, or market stability, potentially leading to undesirable emergent behavior and a misaligned policy. To mathematically encapsulate the problem of aligning RL policy optimization with such externalities, we consider a bilevel optimization problem and connect it to a principal-agent framework, where the principal specifies the broader goals and constraints of the system at the upper level and the agent solves a Markov Decision Process (MDP) at the lower level. The upper level learns a suitable reward parametrization corresponding to the broader goals, while the lower level learns the agent's policy. We propose Principal-driven Policy Alignment via Bilevel RL (PPA-BRL), which efficiently aligns the policy of the agent with the principal's goals. We explicitly analyze the dependence of the principal's trajectory on the lower-level policy and prove the convergence of PPA-BRL to a stationary point of the problem. We illustrate the merits of this framework for alignment with several examples spanning energy-efficient manipulation tasks, social welfare-based tax design, and cost-effective robotic navigation.
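For concreteness, the bilevel structure described in the abstract can be sketched in a generic form; the notation below (U for the principal's upper-level utility, r_theta for the parametrized reward, pi for the agent's policy) is illustrative and not taken from the paper:

\max_{\theta} \; U\big(\pi^{*}(\theta)\big)
\quad \text{s.t.} \quad
\pi^{*}(\theta) \in \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{\theta}(s_t, a_t)\right]

That is, the principal adjusts the reward parameters \theta at the upper level so that the policy \pi^{*}(\theta), which the agent obtains by solving its MDP at the lower level, also serves the principal's broader objective U.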

Author Information

Souradip Chakraborty (University of Maryland, College Park)
Amrit Bedi (University of Maryland, College Park)
Alec Koppel (JP Morgan Chase AI Research)

Bio: Alec Koppel has been a Team Lead/VP at JP Morgan Chase AI Research since June 2022. Previously, he was a Research Scientist within Supply Chain Optimization Technologies (SCOT) at Amazon from 2021 to 2022, and prior to that a Research Scientist at the U.S. Army Research Laboratory in the Computational and Information Sciences Directorate from 2017 to 2021. He completed his Master's degree in Statistics and Doctorate in Electrical and Systems Engineering, both at the University of Pennsylvania (Penn), in August 2017. Before coming to Penn, he completed his Master's degree in Systems Science and Mathematics and Bachelor's degree in Mathematics, both at Washington University in St. Louis (WashU), Missouri. He is a recipient of the 2016 UPenn ESE Dept. Award for Exceptional Service, an awardee of the Science, Mathematics, and Research for Transformation (SMART) Scholarship, a co-author of a Best Paper Finalist at the 2017 IEEE Asilomar Conference on Signals, Systems, and Computers, a finalist for the ARL Honorable Scientist Award 2019, an awardee of the 2020 ARL Director's Research Award Translational Research Challenge (DIRA-TRC), a recipient of a 2020 Honorable Mention from the IEEE Robotics and Automation Letters, and a mentor to the 2021 ARL Summer Symposium Best Project Awardee. His research interests are in optimization and machine learning. His academic work focuses on approximate Bayesian inference, reinforcement learning, and decentralized optimization. Applications include robotics and autonomy, sourcing and vendor selection, and financial markets.

Furong Huang (University of Maryland)

Furong Huang is an Assistant Professor in the Department of Computer Science at the University of Maryland. She works on statistical and trustworthy machine learning, reinforcement learning, graph neural networks, deep learning theory, and federated learning, with a specialization in domain adaptation, algorithmic robustness, and fairness. Furong is a recipient of the MIT Technology Review Innovators Under 35 Asia Pacific Award, the MLconf Industry Impact Research Award, the NSF CRII Award, the Adobe Faculty Research Award, and three JP Morgan Faculty Research Awards, and a finalist for AI Researcher of the Year (AI in Research) at the Women in AI Awards North America. She received her Ph.D. in electrical engineering and computer science from UC Irvine in 2016, after which she spent one year as a postdoctoral researcher at Microsoft Research NYC.

Mengdi Wang (Princeton University)
