Reinforcement learning (RL) agents need to be robust to variations in safety-critical environments. While system identification methods provide a way to infer the variation from online experience, they can fail in settings where fast identification is not possible. Another dominant approach is robust RL, which produces a policy that can handle worst-case scenarios, but these methods are generally designed to achieve robustness to a single uncertainty set that must be specified at train time. Towards a more general solution, we formulate the multi-set robustness problem, in which a policy must be robust to each of several different perturbation sets. We then design an algorithm that enjoys the benefits of both system identification and robust RL: it reduces uncertainty where possible given a few interactions, yet can still act robustly with respect to the remaining uncertainty. On a diverse set of control tasks, our approach demonstrates improved worst-case performance on new environments compared to prior methods based on system identification and on robust RL alone.
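For intuition, the following is a minimal, hypothetical sketch of the high-level idea in the abstract: a policy conditioned on an uncertainty set over environment parameters, where a few interactions are used to rule out implausible parameters (system identification) and the policy then acts robustly with respect to whatever uncertainty remains. The names (MultiSetRobustAgent, infer_uncertainty_set, the identifier scoring function) are illustrative assumptions, not the paper's actual implementation.

# Illustrative sketch only; hypothetical interfaces, not the authors' algorithm.
import numpy as np

class MultiSetRobustAgent:
    def __init__(self, policy, identifier, parameter_grid):
        self.policy = policy                  # maps (state, uncertainty_set) -> action
        self.identifier = identifier          # scores how consistent a candidate parameter is with data
        self.parameter_grid = parameter_grid  # discretized candidate environment parameters

    def infer_uncertainty_set(self, transitions, threshold=0.1):
        """Keep every candidate parameter that the observed transitions cannot rule out."""
        scores = np.array([self.identifier(theta, transitions) for theta in self.parameter_grid])
        plausible = scores >= threshold
        # If fast identification fails (nothing is clearly ruled out), fall back to the full set.
        if not plausible.any():
            plausible[:] = True
        return [theta for theta, keep in zip(self.parameter_grid, plausible) if keep]

    def act(self, state, transitions):
        # Reduce uncertainty where the data allow it, then act robustly w.r.t. the rest.
        uncertainty_set = self.infer_uncertainty_set(transitions)
        return self.policy(state, uncertainty_set)

In this sketch, a narrow inferred set recovers system-identification-style behavior, while an uninformative set degrades gracefully to acting robustly over all candidates.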
Author Information
Annie Xie (Stanford University)
Shagun Sodhani (Facebook AI Research)
Chelsea Finn (Stanford)
Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University. Finn's research interests lie in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, her work has included deep learning algorithms for concurrently learning visual perception and control in robotic manipulation skills, inverse reinforcement learning methods for learning reward functions underlying behavior, and meta-learning algorithms that enable fast, few-shot adaptation in both visual perception and deep reinforcement learning. Finn received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley. Her research has been recognized through the ACM doctoral dissertation award, the Microsoft Research Faculty Fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 Under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg. Throughout her career, she has sought to increase the representation of underrepresented minorities within CS and AI by developing an AI outreach camp at Berkeley for underprivileged high school students and a mentoring program for underrepresented undergraduates across four universities, and by leading efforts within the WiML and Berkeley WiCSE communities of women researchers.
Joelle Pineau (Facebook)
Amy Zhang (FAIR / UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Spotlight: Robust Policy Learning over Multiple Uncertainty Sets »
Thu. Jul 21st 08:45 -- 08:50 PM Room 307
More from the Same Authors
-
2020 : Learning Invariant Representations for Reinforcement Learning without Reconstruction »
Amy Zhang -
2020 : Multi-Task Reinforcement Learning as a Hidden-Parameter Block MDP »
Amy Zhang -
2021 : Multi-Task Offline Reinforcement Learning with Conservative Data Sharing »
Tianhe (Kevin) Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Sergey Levine · Chelsea Finn -
2021 : Visual Adversarial Imitation Learning using Variational Models »
Rafael Rafailov · Tianhe (Kevin) Yu · Aravind Rajeswaran · Chelsea Finn -
2021 : Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments »
Nicholas Rhinehart · Jenny Wang · Glen Berseth · John Co-Reyes · Danijar Hafner · Chelsea Finn · Sergey Levine -
2021 : The Reflective Explorer: Online Meta-Exploration from Offline Data in Visual Tasks with Sparse Rewards »
Rafael Rafailov · Varun Kumar · Tianhe (Kevin) Yu · Avi Singh · mariano phielipp · Chelsea Finn -
2021 : Multi-Task Offline Reinforcement Learning with Conservative Data Sharing »
Tianhe (Kevin) Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Sergey Levine · Chelsea Finn -
2022 : Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models »
Eric Mitchell · Peter Henderson · Christopher Manning · Dan Jurafsky · Chelsea Finn -
2022 : Giving Complex Feedback in Online Student Learning with Meta-Exploration »
Evan Liu · Moritz Stephan · Allen Nie · Chris Piech · Emma Brunskill · Chelsea Finn -
2022 : Policy Architectures for Compositional Generalization in Control »
Allan Zhou · Vikash Kumar · Chelsea Finn · Aravind Rajeswaran -
2022 : Diversify and Disambiguate: Learning from Underspecified Data »
Yoonho Lee · Huaxiu Yao · Chelsea Finn -
2022 : Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time »
Huaxiu Yao · Caroline Choi · Yoonho Lee · Pang Wei Koh · Chelsea Finn -
2022 : Giving Feedback on Interactive Student Programs with Meta-Exploration »
Evan Liu · Moritz Stephan · Allen Nie · Chris Piech · Emma Brunskill · Chelsea Finn -
2022 : When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning »
Annie Xie · Fahim Tajwar · Archit Sharma · Chelsea Finn -
2022 : You Only Live Once: Single-Life Reinforcement Learning via Learned Reward Shaping »
Annie Chen · Archit Sharma · Sergey Levine · Chelsea Finn -
2022 : Diversify and Disambiguate: Learning from Underspecified Data »
Yoonho Lee · Huaxiu Yao · Chelsea Finn -
2022 : Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models »
Eric Mitchell · Peter Henderson · Christopher Manning · Dan Jurafsky · Chelsea Finn -
2023 : In-Context Decision-Making from Supervised Pretraining »
Jonathan Lee · Annie Xie · Aldo Pacchiano · Yash Chandak · Chelsea Finn · Ofir Nachum · Emma Brunskill -
2023 : Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware »
Tony Zhao · Vikash Kumar · Sergey Levine · Chelsea Finn -
2023 : Direct Preference Optimization: Your Language Model is Secretly a Reward Model »
Rafael Rafailov · Archit Sharma · Eric Mitchell · Stefano Ermon · Christopher Manning · Chelsea Finn -
2023 : Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning »
Mitsuhiko Nakamoto · Yuexiang Zhai · Anikait Singh · Max Sobol Mark · Yi Ma · Chelsea Finn · Aviral Kumar · Sergey Levine -
2023 : Keynote I: Detecting and Adapting to Distribution Shift »
Chelsea Finn -
2023 : Conditional Bisimulation for Generalization in Reinforcement Learning »
Anuj Mahajan · Amy Zhang -
2023 Poster: Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning »
Tongzhou Wang · Antonio Torralba · Phillip Isola · Amy Zhang -
2023 Oral: DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature »
Eric Mitchell · Yoonho Lee · Alexander Khazatsky · Christopher Manning · Chelsea Finn -
2023 Poster: Simple Embodied Language Learning as a Byproduct of Meta-Reinforcement Learning »
Evan Liu · Sahaana Suri · Tong Mu · Allan Zhou · Chelsea Finn -
2023 Poster: DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature »
Eric Mitchell · Yoonho Lee · Alexander Khazatsky · Christopher Manning · Chelsea Finn -
2023 Poster: LIV: Language-Image Representations and Rewards for Robotic Control »
Yecheng Jason Ma · Vikash Kumar · Amy Zhang · Osbert Bastani · Dinesh Jayaraman -
2022 : Giving Complex Feedback in Online Student Learning with Meta-Exploration »
Evan Liu · Moritz Stephan · Allen Nie · Chris Piech · Emma Brunskill · Chelsea Finn -
2022 Workshop: Responsible Decision Making in Dynamic Environments »
Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier -
2022 Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward »
Huaxiu Yao · Hugo Larochelle · Percy Liang · Colin Raffel · Jian Tang · Ying WEI · Saining Xie · Eric Xing · Chelsea Finn -
2022 : Panel discussion »
Steffen Schneider · Aleksander Madry · Alexei Efros · Chelsea Finn · Soheil Feizi -
2022 : Invited talks 3, Q/A, Amy, Rich and Liting »
Liting Sun · Amy Zhang · Richard Zemel -
2022 : Q/A: Chelsea Finn »
Chelsea Finn -
2022 : Invited Speaker: Chelsea Finn »
Chelsea Finn -
2022 : Invited talks 3, Amy Zhang, Rich Zemel and Liting Sun »
Amy Zhang · Richard Zemel · Liting Sun -
2022 : Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time »
Huaxiu Yao · Caroline Choi · Yoonho Lee · Pang Wei Koh · Chelsea Finn -
2022 : Invited Talk 3: Chelsea Finn »
Chelsea Finn -
2022 Poster: Online Decision Transformer »
Qinqing Zheng · Amy Zhang · Aditya Grover -
2022 Poster: How to Leverage Unlabeled Data in Offline Reinforcement Learning »
Tianhe (Kevin) Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Chelsea Finn · Sergey Levine -
2022 Poster: Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning »
Philippe Hansen-Estruch · Amy Zhang · Ashvin Nair · Patrick Yin · Sergey Levine -
2022 Poster: Memory-Based Model Editing at Scale »
Eric Mitchell · Charles Lin · Antoine Bosselut · Christopher Manning · Chelsea Finn -
2022 Spotlight: Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning »
Philippe Hansen-Estruch · Amy Zhang · Ashvin Nair · Patrick Yin · Sergey Levine -
2022 Spotlight: How to Leverage Unlabeled Data in Offline Reinforcement Learning »
Tianhe (Kevin) Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Chelsea Finn · Sergey Levine -
2022 Oral: Online Decision Transformer »
Qinqing Zheng · Amy Zhang · Aditya Grover -
2022 Spotlight: Memory-Based Model Editing at Scale »
Eric Mitchell · Charles Lin · Antoine Bosselut · Christopher Manning · Chelsea Finn -
2022 Poster: The Neural Race Reduction: Dynamics of Abstraction in Gated Networks »
Andrew Saxe · Shagun Sodhani · Sam Lewallen -
2022 Poster: Denoised MDPs: Learning World Models Better Than the World Itself »
Tongzhou Wang · Simon Du · Antonio Torralba · Phillip Isola · Amy Zhang · Yuandong Tian -
2022 Poster: Improving Out-of-Distribution Robustness via Selective Augmentation »
Huaxiu Yao · Yu Wang · Sai Li · Linjun Zhang · Weixin Liang · James Zou · Chelsea Finn -
2022 Spotlight: The Neural Race Reduction: Dynamics of Abstraction in Gated Networks »
Andrew Saxe · Shagun Sodhani · Sam Lewallen -
2022 Spotlight: Denoised MDPs: Learning World Models Better Than the World Itself »
Tongzhou Wang · Simon Du · Antonio Torralba · Phillip Isola · Amy Zhang · Yuandong Tian -
2022 Spotlight: Improving Out-of-Distribution Robustness via Selective Augmentation »
Huaxiu Yao · Yu Wang · Sai Li · Linjun Zhang · Weixin Liang · James Zou · Chelsea Finn -
2022 Poster: A State-Distribution Matching Approach to Non-Episodic Reinforcement Learning »
Archit Sharma · Rehaan Ahmad · Chelsea Finn -
2022 Poster: Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations »
Michael Zhang · Nimit Sohoni · Hongyang Zhang · Chelsea Finn · Christopher Re -
2022 Oral: Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations »
Michael Zhang · Nimit Sohoni · Hongyang Zhang · Chelsea Finn · Christopher Re -
2022 Spotlight: A State-Distribution Matching Approach to Non-Episodic Reinforcement Learning »
Archit Sharma · Rehaan Ahmad · Chelsea Finn -
2021 : Live Panel Discussion »
Thomas Dietterich · Chelsea Finn · Kamalika Chaudhuri · Yarin Gal · Uri Shalit -
2021 Poster: Offline Meta-Reinforcement Learning with Advantage Weighting »
Eric Mitchell · Rafael Rafailov · Xue Bin Peng · Sergey Levine · Chelsea Finn -
2021 Poster: WILDS: A Benchmark of in-the-Wild Distribution Shifts »
Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang -
2021 Spotlight: Offline Meta-Reinforcement Learning with Advantage Weighting »
Eric Mitchell · Rafael Rafailov · Xue Bin Peng · Sergey Levine · Chelsea Finn -
2021 Oral: WILDS: A Benchmark of in-the-Wild Distribution Shifts »
Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang -
2021 Poster: Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices »
Evan Liu · Aditi Raghunathan · Percy Liang · Chelsea Finn -
2021 Spotlight: Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices »
Evan Liu · Aditi Raghunathan · Percy Liang · Chelsea Finn -
2021 Poster: Just Train Twice: Improving Group Robustness without Training Group Information »
Evan Liu · Behzad Haghgoo · Annie Chen · Aditi Raghunathan · Pang Wei Koh · Shiori Sagawa · Percy Liang · Chelsea Finn -
2021 Poster: Multi-Task Reinforcement Learning with Context-based Representations »
Shagun Sodhani · Amy Zhang · Joelle Pineau -
2021 Oral: Just Train Twice: Improving Group Robustness without Training Group Information »
Evan Liu · Behzad Haghgoo · Annie Chen · Aditi Raghunathan · Pang Wei Koh · Shiori Sagawa · Percy Liang · Chelsea Finn -
2021 Spotlight: Multi-Task Reinforcement Learning with Context-based Representations »
Shagun Sodhani · Amy Zhang · Joelle Pineau -
2021 Poster: Deep Reinforcement Learning amidst Continual Structured Non-Stationarity »
Annie Xie · James Harrison · Chelsea Finn -
2021 Spotlight: Deep Reinforcement Learning amidst Continual Structured Non-Stationarity »
Annie Xie · James Harrison · Chelsea Finn -
2020 : Invited Talk 11: Prof. Chelsea Finn from Stanford University »
Chelsea Finn -
2020 : Concluding Remarks »
Sarath Chandar · Shagun Sodhani -
2020 : Q&A by Rich Sutton »
Richard Sutton · Shagun Sodhani · Sarath Chandar -
2020 : Contributed Talk: Deep Reinforcement Learning amidst Lifelong Non-Stationarity »
Annie Xie -
2020 : Q&A with Irina Rish »
Irina Rish · Shagun Sodhani · Sarath Chandar -
2020 : Q&A with Jürgen Schmidhuber »
Jürgen Schmidhuber · Shagun Sodhani · Sarath Chandar -
2020 : Q&A with Partha Pratim Talukdar »
Partha Talukdar · Shagun Sodhani · Sarath Chandar -
2020 : Q&A with Katja Hoffman »
Katja Hofmann · Luisa Zintgraf · Rika Antonova · Sarath Chandar · Shagun Sodhani -
2020 Workshop: 4th Lifelong Learning Workshop »
Shagun Sodhani · Sarath Chandar · Balaraman Ravindran · Doina Precup -
2020 : Opening Comments »
Sarath Chandar · Shagun Sodhani -
2020 : Paper spotlight: Learning Invariant Representations for Reinforcement Learning without Reconstruction »
Amy Zhang -
2020 Poster: Goal-Aware Prediction: Learning to Model What Matters »
Suraj Nair · Silvio Savarese · Chelsea Finn -
2020 Poster: On the Expressivity of Neural Networks for Deep Reinforcement Learning »
Kefan Dong · Yuping Luo · Tianhe (Kevin) Yu · Chelsea Finn · Tengyu Ma -
2020 Poster: Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings »
Jesse Zhang · Brian Cheung · Chelsea Finn · Sergey Levine · Dinesh Jayaraman -
2020 Poster: Invariant Causal Prediction for Block MDPs »
Amy Zhang · Clare Lyle · Shagun Sodhani · Angelos Filos · Marta Kwiatkowska · Joelle Pineau · Yarin Gal · Doina Precup -
2019 Poster: TarMAC: Targeted Multi-Agent Communication »
Abhishek Das · Theophile Gervet · Joshua Romoff · Dhruv Batra · Devi Parikh · Michael Rabbat · Joelle Pineau -
2019 Oral: TarMAC: Targeted Multi-Agent Communication »
Abhishek Das · Theophile Gervet · Joshua Romoff · Dhruv Batra · Devi Parikh · Michael Rabbat · Joelle Pineau -
2018 Poster: Composable Planning with Attributes »
Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus -
2018 Oral: Composable Planning with Attributes »
Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus