Recent studies reveal that a well-trained deep reinforcement learning (RL) policy can be particularly vulnerable to adversarial perturbations on input observations. Therefore, it is crucial to train RL agents that are robust against any attack with a bounded budget. Existing robust training methods in deep RL either treat correlated steps separately, ignoring the robustness of the long-term reward, or train the agent and an RL-based attacker together, doubling the computational burden and sample complexity of the training process. In this work, we propose a strong and efficient robust training framework for RL, named Worst-case-aware Robust RL (WocaR-RL), that directly estimates and optimizes the worst-case reward of a policy under bounded attacks without requiring extra samples for learning an attacker. Experiments on multiple environments show that WocaR-RL achieves state-of-the-art performance under various strong attacks and obtains significantly higher training efficiency than prior state-of-the-art robust training methods.
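The abstract gives no implementation details, but the core idea it states, estimating a pessimistic (worst-case) value of the current policy when observations can be perturbed within a bounded set and using that estimate as a training signal, can be sketched as follows. This is a minimal illustrative sketch under assumed settings (a discrete-action Q-network, an ℓ∞ perturbation ball, a random-sampling lower bound), not the paper's actual algorithm; names such as `QNetwork` and `pessimistic_value` are hypothetical.

```python
# Minimal illustrative sketch (not the paper's implementation): estimate a
# pessimistic value for a discrete-action Q-network when observations may be
# perturbed anywhere inside an l_inf ball of radius eps.
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def pessimistic_value(q_net: QNetwork, obs: torch.Tensor, eps: float,
                      n_samples: int = 8) -> torch.Tensor:
    """Lower-bound the value of the greedy action by taking the minimum
    Q-value over random perturbations inside the eps-ball (a crude proxy;
    a tighter bound would require a certified/convex relaxation)."""
    with torch.no_grad():
        greedy = q_net(obs).argmax(dim=-1, keepdim=True)  # action chosen on clean obs
    worst = None
    for _ in range(n_samples):
        noise = (torch.rand_like(obs) * 2.0 - 1.0) * eps   # uniform in [-eps, eps]
        q_pert = q_net(obs + noise).gather(-1, greedy)     # Q of that action under perturbation
        worst = q_pert if worst is None else torch.minimum(worst, q_pert)
    return worst.squeeze(-1)


# Usage: penalize the gap between the clean value and its pessimistic estimate,
# so training trades off clean return against worst-case behavior.
obs = torch.randn(32, 4)                  # batch of observations (hypothetical dimensions)
q_net = QNetwork(obs_dim=4, n_actions=2)
clean_v = q_net(obs).max(dim=-1).values
robust_penalty = (clean_v - pessimistic_value(q_net, obs, eps=0.05)).mean()
```

Note that this random-perturbation minimum is only a loose bound used for illustration; importantly, it matches the abstract's claim in spirit, since no separate attacker policy or extra attack samples are learned.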
Author Information
Yongyuan Liang (Sun Yat-sen University)
Yanchao Sun (University of Maryland, College Park)
Ruijie Zheng (University of Maryland, College Park)
Furong Huang (University of Maryland)
More from the Same Authors
- 2022 : Everyone Matters: Customizing the Dynamics of Decision Boundary for Adversarial Robustness »
  Yuancheng Xu · Yanchao Sun · Furong Huang
- 2022 : Live in the Moment: Learning Dynamics Model Adapted to Evolving Policy »
  xiyao wang · Wichayaporn Wongkamjan · Furong Huang
- 2022 : Certifiably Robust Multi-Agent Reinforcement Learning against Adversarial Communication »
  Yanchao Sun · Ruijie Zheng · Parisa Hassanzadeh · Yongyuan Liang · Soheil Feizi · Sumitra Ganesh · Furong Huang
- 2022 Poster: Scaling-up Diverse Orthogonal Convolutional Networks by a Paraunitary Framework »
  Jiahao Su · Wonmin Byeon · Furong Huang
- 2022 Spotlight: Scaling-up Diverse Orthogonal Convolutional Networks by a Paraunitary Framework »
  Jiahao Su · Wonmin Byeon · Furong Huang