Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning
Yue Wu · Shuangfei Zhai · Nitish Srivastava · Joshua M Susskind · Jian Zhang · Ruslan Salakhutdinov · Hanlin Goh

Tue Jul 20 06:20 PM -- 06:25 PM (PDT)

Offline Reinforcement Learning promises to learn effective policies from previously collected, static datasets without the need for exploration. However, existing Q-learning- and actor-critic-based off-policy RL algorithms fail when bootstrapping from out-of-distribution (OOD) actions or states. We hypothesize that a key missing ingredient in existing methods is a proper treatment of uncertainty in the offline setting. We propose Uncertainty Weighted Actor-Critic (UWAC), an algorithm that detects OOD state-action pairs and down-weights their contribution in the training objectives accordingly. Implementation-wise, we adopt a practical and effective dropout-based uncertainty estimation method that introduces very little overhead over existing RL algorithms. Empirically, we observe that UWAC substantially improves model stability during training. In addition, UWAC outperforms existing offline RL methods on a variety of competitive tasks, and achieves significant performance gains over the state-of-the-art baseline on datasets with sparse demonstrations collected from human experts.
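The abstract's core idea, estimating uncertainty via Monte Carlo dropout and down-weighting high-uncertainty (likely OOD) state-action pairs in the training objective, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the tiny random-weight Q-network, the number of dropout samples, and the specific weighting function `beta / (var + beta)` are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny Q-network (one hidden layer); weights are random
# placeholders standing in for a trained critic.
W1 = rng.normal(0.0, 0.5, (8, 16))
W2 = rng.normal(0.0, 0.5, (16, 1))

def q_mc_dropout(sa, n_samples=50, p_drop=0.5):
    """Monte Carlo dropout: keep dropout active at evaluation time and
    run several stochastic forward passes, then use the spread of the
    resulting Q-values as an uncertainty estimate."""
    h = np.maximum(sa @ W1, 0.0)  # ReLU hidden features
    qs = []
    for _ in range(n_samples):
        mask = rng.random(h.shape) > p_drop           # random dropout mask
        qs.append(((h * mask) / (1.0 - p_drop)) @ W2)  # inverted dropout
    qs = np.stack(qs)
    return qs.mean(axis=0), qs.var(axis=0)

def uncertainty_weights(var, beta=1.0):
    # Down-weight samples whose Q-estimate has high variance.  The exact
    # form used here (beta / (var + beta), bounded in (0, 1]) is an
    # illustrative choice, not necessarily the paper's weighting.
    return beta / (var + beta)

sa = rng.normal(0.0, 1.0, (4, 8))   # batch of state-action pairs
mean_q, var_q = q_mc_dropout(sa)
weights = uncertainty_weights(var_q)
# A weighted critic loss would then be: mean(weights * (Q(s,a) - target)**2),
# so OOD pairs with uncertain targets contribute less to the update.
```

The key property is that the weights shrink toward zero as the MC-dropout variance grows, which is what lets the critic ignore bootstrapped targets it cannot estimate reliably.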

Author Information

Yue Wu (Carnegie Mellon University)
Shuangfei Zhai (Apple)
Nitish Srivastava (Apple)
Joshua M Susskind (Apple, Inc.)
Jian Zhang (Apple Inc.)

AI and Robotics; AI Research & Autonomous System Technologies at Apple

Ruslan Salakhutdinov (Carnegie Mellon University)
Hanlin Goh (Apple)
