Poster
Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts?
Angelos Filos · Panagiotis Tigas · Rowan McAllister · Nicholas Rhinehart · Sergey Levine · Yarin Gal

Wed Jul 15 10:00 AM -- 10:45 AM & Wed Jul 15 11:00 PM -- 11:45 PM (PDT)

Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment, typically leading to arbitrary deductions and poorly-informed decisions. In principle, detecting and adapting to OOD scenes can mitigate their adverse effects. In this paper, we highlight the limitations of current approaches to novel driving scenes and propose an epistemic uncertainty-aware planning method, called \emph{robust imitative planning} (RIP). Our method can detect and recover from some distribution shifts, reducing overconfident and catastrophic extrapolations in OOD scenes. If the model's uncertainty is too great to suggest a safe course of action, the model can instead query the expert driver for feedback, enabling sample-efficient online adaptation, a variant of our method we term \emph{adaptive robust imitative planning} (AdaRIP). Our methods outperform current state-of-the-art approaches in the nuScenes \emph{prediction} challenge, but since no benchmark evaluating OOD detection and adaptation currently exists to assess \emph{control}, we introduce an autonomous car novel-scene benchmark, \texttt{CARNOVEL}, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
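The core idea described above — aggregating an ensemble's plan scores pessimistically and deferring to an expert when the ensemble disagrees too much — can be sketched in a few lines. This is only an illustrative toy, not the authors' implementation: the random scores, the `disagreement_threshold`, and the standard-deviation disagreement proxy are all assumptions made for the example; in RIP the scores would come from trained imitative density models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an ensemble of learned imitative models:
# each row holds one model's log-likelihood for each candidate plan.
n_models, n_plans = 5, 10
log_likelihoods = rng.normal(loc=0.0, scale=1.0, size=(n_models, n_plans))

def rip_select(scores, disagreement_threshold=2.0):
    """Pick a plan by worst-case (min over ensemble) aggregation,
    and flag an expert query when ensemble disagreement on the
    chosen plan is high (an AdaRIP-style trigger)."""
    worst_case = scores.min(axis=0)            # pessimistic score per plan
    best_plan = int(worst_case.argmax())       # max-min plan choice
    disagreement = scores[:, best_plan].std()  # proxy for epistemic uncertainty
    defer = bool(disagreement > disagreement_threshold)
    return best_plan, defer

plan, defer_to_expert = rip_select(log_likelihoods)
```

Maximizing the minimum score across the ensemble makes the planner prefer trajectories that every model considers plausible, which is what suppresses the overconfident extrapolations the abstract refers to.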

Author Information

Angelos Filos (University of Oxford)
Panagiotis Tigas (Oxford University)
Rowan McAllister (UC Berkeley)
Nicholas Rhinehart (Carnegie Mellon University)

Nick Rhinehart is a Ph.D. student at Carnegie Mellon University, focusing on understanding, forecasting, and controlling the behavior of agents through computer vision and machine learning. He is particularly interested in systems that learn to reason about the future. He has conducted research with Sergey Levine at UC Berkeley, Paul Vernaza at N.E.C. Labs, and Drew Bagnell at Uber ATG. His First-Person Forecasting work received the Marr Prize (Best Paper) Honorable Mention Award at ICCV 2017. Nick co-organized the Tutorial on Inverse RL for Computer Vision at CVPR 2018 and is the primary organizer of the ICML 2019 Workshop on Imitation, Intent, and Interaction.

Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Yarin Gal (University of Oxford)
