We study a new paradigm for sequential decision making, called offline policy learning from observations (PLfO). Offline PLfO aims to learn policies using datasets of substandard quality: 1) only a subset of trajectories is labeled with rewards, 2) labeled trajectories may not contain actions, 3) labeled trajectories may not be of high quality, and 4) the data may not have full coverage. Such imperfections are common in real-world learning scenarios, and offline PLfO encompasses many existing offline learning setups, including offline imitation learning (IL), offline IL from observations (ILfO), and offline reinforcement learning (RL). In this work, we present a generic approach to offline PLfO, called Modality-agnostic Adversarial Hypothesis Adaptation for Learning from Observations (MAHALO). Built upon the pessimism concept in offline RL, MAHALO optimizes the policy using a performance lower bound that accounts for uncertainty due to the dataset's insufficient coverage. We implement this idea by adversarially training data-consistent critic and reward functions, which forces the learned policy to be robust to data deficiency. We show that MAHALO consistently outperforms or matches specialized algorithms across a variety of offline PLfO tasks in both theory and experiments. Our code is available at https://github.com/AnqiLi/mahalo.
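The abstract's max-min idea can be pictured with a short sketch. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' implementation: `Policy`, `RewardCritic`, `lower_bound`, `data_fit`, and the additive value estimate are all hypothetical stand-ins for MAHALO's actual pessimistic performance lower bound and data-consistency sets.

```python
# Illustrative sketch (assumptions, not the authors' code) of the max-min
# idea in the abstract: adversarially train a reward r and critic f that
# remain consistent with the data while being pessimistic about the policy,
# then improve the policy against them.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
    def forward(self, obs):
        return torch.tanh(self.net(obs))

class RewardCritic(nn.Module):
    # Jointly parameterizes a reward model r and a critic f (hypothetical).
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.r = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.f = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def lower_bound(pi, rc, obs):
    # Schematic stand-in for the paper's performance lower bound; the
    # additive r + f combination is a simplification, not MAHALO's estimator.
    x = torch.cat([obs, pi(obs)], dim=-1)
    return (rc.r(x) + rc.f(x)).mean()

def data_fit(rc, obs, act, rew):
    # Consistency of r with the (possibly partial) reward labels in the data.
    x = torch.cat([obs, act], dim=-1)
    return ((rc.r(x).squeeze(-1) - rew) ** 2).mean()

obs_dim, act_dim, beta = 4, 2, 1.0
pi, rc = Policy(obs_dim, act_dim), RewardCritic(obs_dim, act_dim)
opt_pi = torch.optim.Adam(pi.parameters(), lr=3e-4)
opt_rc = torch.optim.Adam(rc.parameters(), lr=3e-4)

obs = torch.randn(32, obs_dim)  # dummy batch standing in for the dataset
act = torch.randn(32, act_dim)
rew = torch.randn(32)

# Adversary step: (r, f) minimize the policy's lower bound while fitting data.
adv_loss = lower_bound(pi, rc, obs) + beta * data_fit(rc, obs, act, rew)
opt_rc.zero_grad(); adv_loss.backward(); opt_rc.step()

# Policy step: maximize the resulting performance lower bound.
pi_loss = -lower_bound(pi, rc, obs)
opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()
```

The design choice mirrored here is the alternation: the adversary can lower the policy's estimated value only in ways the data cannot rule out, which is what makes the learned policy robust to missing rewards, missing actions, and poor coverage.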
Author Information
Anqi Li (University of Washington)
Byron Boots (University of Washington)
Ching-An Cheng (Microsoft Research)
More from the Same Authors
- 2023: Survival Instinct in Offline Reinforcement Learning and Implicit Human Bias in Data
  Anqi Li · Dipendra Misra · Andrey Kolobov · Ching-An Cheng
- 2023 Poster: Provable Reset-free Reinforcement Learning by No-Regret Reduction
  Hoai-An Nguyen · Ching-An Cheng
- 2023 Poster: Hindsight Learning for MDPs with Exogenous Inputs
  Sean R. Sinclair · Felipe Vieira Frujeri · Ching-An Cheng · Luke Marshall · Hugo Barbalho · Jingling Li · Jennifer Neville · Ishai Menache · Adith Swaminathan
- 2022 Poster: Adversarially Trained Actor Critic for Offline Reinforcement Learning
  Ching-An Cheng · Tengyang Xie · Nan Jiang · Alekh Agarwal
- 2022 Oral: Adversarially Trained Actor Critic for Offline Reinforcement Learning
  Ching-An Cheng · Tengyang Xie · Nan Jiang · Alekh Agarwal
- 2021 Poster: Safe Reinforcement Learning Using Advantage-Based Intervention
  Nolan Wagener · Byron Boots · Ching-An Cheng
- 2021 Spotlight: Safe Reinforcement Learning Using Advantage-Based Intervention
  Nolan Wagener · Byron Boots · Ching-An Cheng