Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation »
While agents trained by Reinforcement Learning (RL) can solve increasingly challenging tasks directly from visual observations, generalizing learned skills to novel environments remains very challenging. Extensive use of data augmentation is a promising technique for improving generalization in RL, but it is often found to decrease sample efficiency and can even lead to divergence. In this paper, we investigate causes of instability when using data augmentation in common off-policy RL algorithms. We identify two problems, both rooted in high-variance Q-targets. Based on our findings, we propose a simple yet effective technique for stabilizing this class of algorithms under augmentation. We perform extensive empirical evaluation of image-based RL using both ConvNets and Vision Transformers (ViT) on a family of benchmarks based on DeepMind Control Suite, as well as in robotic manipulation tasks. Our method greatly improves stability and sample efficiency of ConvNets under augmentation, and achieves generalization results competitive with state-of-the-art methods for image-based RL. We further show that our method scales to RL with ViT-based architectures, and that data augmentation may be especially important in this setting. Code and videos: https://nicklashansen.github.io/SVEA
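The abstract describes the core stabilization idea at a high level: because instability stems from high-variance Q-targets, the targets are computed from unaugmented observations only, while the online Q-function is trained on both the clean and the augmented view against that shared target. A minimal toy sketch of that idea follows; the linear Q-function, the noise-based `augment` stand-in, and all function names here are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_values(obs, w):
    """Toy linear Q-function: one scalar Q-value per observation."""
    return obs @ w

def augment(obs):
    """Stand-in for image augmentation (e.g. random shift): adds noise."""
    return obs + rng.normal(scale=0.1, size=obs.shape)

def svea_style_loss(obs, next_obs, reward, w, w_target, gamma=0.99):
    # Key idea from the abstract: the Q-target is computed from the
    # *unaugmented* next observation only, keeping its variance low.
    # (In a real implementation, gradients would be stopped here.)
    target = reward + gamma * q_values(next_obs, w_target)
    # The online Q-function is regressed toward the same low-variance
    # target from both the clean and the augmented current observation.
    err_clean = q_values(obs, w) - target
    err_aug = q_values(augment(obs), w) - target
    return 0.5 * np.mean(err_clean ** 2) + 0.5 * np.mean(err_aug ** 2)

# Tiny synthetic batch of 4 transitions with 3-dimensional observations.
obs = rng.normal(size=(4, 3))
next_obs = rng.normal(size=(4, 3))
reward = rng.normal(size=4)
w = rng.normal(size=3)
loss = svea_style_loss(obs, next_obs, reward, w, w.copy())
```

In contrast, naively augmenting the next observation inside the target would inject augmentation noise into the regression target itself, which is the source of variance the paper identifies.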
Author Information
Nicklas Hansen (University of California, San Diego)
Hao Su (UCSD)
Xiaolong Wang (UCSD)
More from the Same Authors
- 2021: Disentangled Attention as Intrinsic Regularization for Bimanual Multi-Object Manipulation » Minghao Zhang · Pingcheng Jian · Yi Wu · Harry (Huazhe) Xu · Xiaolong Wang
- 2021: Learning Vision-Guided Quadrupedal Locomotion with Cross-Modal Transformers » Ruihan Yang · Minghao Zhang · Nicklas Hansen · Harry (Huazhe) Xu · Xiaolong Wang
- 2023 Poster: The photo-sketch correspondence problem: a new benchmark and a self-supervised approach » Xuanchen Lu · Xiaolong Wang · Judith E. Fan
- 2023 Poster: Abstract-to-Executable Trajectory Translation for One-Shot Task Generalization » Stone Tao · Xiaochen Li · Tongzhou Mu · Zhiao Huang · Yuzhe Qin · Hao Su
- 2023 Poster: On Pre-Training for Visuo-Motor Control: Revisiting a Learning-from-Scratch Baseline » Nicklas Hansen · Zhecheng Yuan · Yanjie Ze · Tongzhou Mu · Aravind Rajeswaran · Hao Su · Huazhe Xu · Xiaolong Wang
- 2023 Poster: MonoNeRF: Learning Generalizable NeRFs from Monocular Videos without Camera Pose » Yang Fu · Ishan Misra · Xiaolong Wang
- 2023 Poster: Reparameterized Policy Learning for Multimodal Trajectory Optimization » Zhiao Huang · Litian Liang · Zhan Ling · Xuanlin Li · Chuang Gan · Hao Su
- 2023 Oral: Reparameterized Policy Learning for Multimodal Trajectory Optimization » Zhiao Huang · Litian Liang · Zhan Ling · Xuanlin Li · Chuang Gan · Hao Su
- 2022 Poster: Temporal Difference Learning for Model Predictive Control » Nicklas Hansen · Hao Su · Xiaolong Wang
- 2022 Spotlight: Temporal Difference Learning for Model Predictive Control » Nicklas Hansen · Hao Su · Xiaolong Wang
- 2022 Poster: Improving Policy Optimization with Generalist-Specialist Learning » Zhiwei Jia · Xuanlin Li · Zhan Ling · Shuang Liu · Yiran Wu · Hao Su
- 2022 Spotlight: Improving Policy Optimization with Generalist-Specialist Learning » Zhiwei Jia · Xuanlin Li · Zhan Ling · Shuang Liu · Yiran Wu · Hao Su
- 2021 Poster: Compositional Video Synthesis with Action Graphs » Amir Bar · Roi Herzig · Xiaolong Wang · Anna Rohrbach · Gal Chechik · Trevor Darrell · Amir Globerson
- 2021 Spotlight: Compositional Video Synthesis with Action Graphs » Amir Bar · Roi Herzig · Xiaolong Wang · Anna Rohrbach · Gal Chechik · Trevor Darrell · Amir Globerson
- 2020 Poster: Information-Theoretic Local Minima Characterization and Regularization » Zhiwei Jia · Hao Su
- 2020 Poster: Deep Isometric Learning for Visual Recognition » Haozhi Qi · Chong You · Xiaolong Wang · Yi Ma · Jitendra Malik