Self-driving cars and advanced safety features present one of today's greatest challenges and opportunities for Artificial Intelligence (AI). Despite billions of dollars of investment and encouraging progress under certain operational constraints, there are no driverless cars on public roads today without human safety drivers. Autonomous Driving research spans a wide spectrum, from modular architectures -- composed of hardcoded or independently learned sub-systems -- to end-to-end deep networks with a single model from sensors to controls. In any such system, Machine Learning is a key component. However, there are formidable learning challenges due to safety constraints, the need for large-scale manual labeling, and the complex, high-dimensional structure of driving data, whether inputs (from cameras, HD maps, inertial measurement units, wheel encoders, LiDAR, radar, etc.) or predictions (e.g., world state representations, behavior models, trajectory forecasts, plans, controls). The goal of this workshop is to explore the frontier of learning approaches for safe, robust, and efficient Autonomous Driving (AD) at scale. The workshop will span both theoretical frameworks and practical issues, especially in the area of deep learning.
Website: https://sites.google.com/view/aiad2020
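The description above spans modular pipelines to end-to-end models. As a concrete anchor for the end-to-end extreme, here is a minimal sketch of a single network from camera pixels to controls; the layer sizes, input resolution, and two-dimensional control output are illustrative assumptions, not any particular published system:

```python
import torch
import torch.nn as nn

# one model from sensors (a camera frame) to controls (steering, acceleration)
end_to_end_policy = nn.Sequential(
    nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 48, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(100), nn.ReLU(),   # infers the flattened size on first call
    nn.Linear(100, 2),               # [steering, acceleration]
)

controls = end_to_end_policy(torch.randn(1, 3, 66, 200))  # -> (1, 2)
```

A modular system would instead chain separately built perception, prediction, and planning components, trading the simplicity of a single model for interpretability and independent validation.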
Fri 5:00 a.m. - 5:10 a.m. | Open Remark 1 (Talk)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Wei-Lun (Harry) Chao · Rowan McAllister · Li Erran Li · Sven Kreiss · Adrien Gaidon
Fri 5:10 a.m. - 5:40 a.m. | Invited Talk: Deep Direct Visual SLAM (Daniel Cremers) (Talk)
Video: https://slideslive.com/38930755/deep-direct-visual-slam

Abstract: The reconstruction of our 3D world from moving cameras is among the central challenges in computer vision. I will present recent developments in camera-based reconstruction of the world. In particular, I will discuss direct methods for visual SLAM (simultaneous localization and mapping). These recover camera motion and 3D structure directly from brightness consistency, thereby providing better precision and robustness than classical keypoint-based techniques. Moreover, I will demonstrate how we can leverage the predictive power of deep networks to significantly boost the performance of direct SLAM methods. The resulting methods allow us to track a single camera with a precision on par with state-of-the-art stereo-inertial odometry methods. Moreover, we can relocalize a moving vehicle with respect to a previously generated map despite significant changes in illumination and weather.

Bio: Daniel Cremers received a PhD in Computer Science (2002) from the University of Mannheim, Germany. Subsequently he spent two years as a postdoctoral researcher at the University of California, Los Angeles (UCLA) and one year as a permanent researcher at Siemens Corporate Research in Princeton, NJ. From 2005 until 2009 he was an associate professor at the University of Bonn. Since 2009 he has held the Chair of Computer Vision and Artificial Intelligence at the Technical University of Munich. His publications have received numerous awards, including the 'Best Paper of the Year 2003' (Int. Pattern Recognition Society), the 'Olympus Award 2004' (German Soc. for Pattern Recognition) and the '2005 UCLA Chancellor's Award for Postdoctoral Research'. For pioneering research he has received five grants from the European Research Council, including a Starting Grant, a Consolidator Grant and an Advanced Grant. In 2018 he organized the largest-ever European Conference on Computer Vision in Munich. He is a member of the Bavarian Academy of Sciences and Humanities. In December 2010 he was listed among "Germany's top 40 researchers below 40" (Capital). On March 1st, 2016, Prof. Cremers received the Gottfried Wilhelm Leibniz Award, the biggest award in German academia. He is a co-founder of several companies, most recently the high-tech startup Artisense.

Daniel Cremers
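The brightness-consistency objective at the heart of direct methods can be stated compactly. A minimal sketch (our illustration, not Prof. Cremers' implementation), assuming pinhole intrinsics and a reference depth map; it evaluates the photometric residuals that direct SLAM minimizes over camera pose and depth:

```python
import numpy as np

def photometric_residuals(I_ref, I_tgt, depth_ref, K, R, t, pixels):
    """Brightness-consistency residuals for a candidate pose (R, t).

    I_ref, I_tgt : (H, W) grayscale images
    depth_ref    : (H, W) depth map of the reference frame
    K            : (3, 3) pinhole intrinsics
    pixels       : (N, 2) integer (u, v) coordinates in the reference frame
    """
    K_inv = np.linalg.inv(K)
    residuals = []
    for u, v in pixels:
        # back-project the reference pixel to 3D using its depth
        p_ref = depth_ref[v, u] * (K_inv @ np.array([u, v, 1.0]))
        # transform into the target camera and project back to the image
        p_tgt = K @ (R @ p_ref + t)
        u2, v2 = int(p_tgt[0] / p_tgt[2]), int(p_tgt[1] / p_tgt[2])
        if 0 <= v2 < I_tgt.shape[0] and 0 <= u2 < I_tgt.shape[1]:
            # direct methods minimize these intensity differences,
            # rather than keypoint reprojection errors
            residuals.append(float(I_tgt[v2, u2]) - float(I_ref[v, u]))
    return np.asarray(residuals)
```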
Fri 5:40 a.m. - 5:50 a.m. | Q&A: Daniel Cremers (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Daniel Cremers
Fri 5:50 a.m. - 6:20 a.m. | Invited Talk: Raster-based Motion Prediction for Safe Self-Driving (Nemanja Djuric) (Talk)
Video: https://slideslive.com/38930749/rasterbased-motion-prediction-for-safe-selfdriving

Abstract: Motion prediction is a critical component of self-driving technology, tasked with inferring the future behavior of traffic actors as well as modeling behavior uncertainty. In this talk we focus on this important problem and discuss raster-based methods that have shown state-of-the-art performance. These approaches take top-down images of the surrounding area as their input, providing near-complete contextual information necessary to accurately predict traffic motion. We present a number of recently proposed models, and show how to develop methods that obey map and other physical constraints of the environment.

Bio: Nemanja Djuric is a Staff Engineer and Tech Lead Manager at Uber ATG, where for the past five years he has worked on motion prediction, object detection, and other technologies supporting self-driving vehicles. Prior to ATG he worked as a research scientist at Yahoo Labs, which he joined after obtaining his PhD at Temple University.

Nemanja Djuric
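The rasterize-then-regress recipe the abstract describes can be sketched in a few lines. A minimal sketch, assuming an RGB bird's-eye-view raster and a 30-step horizon; the layer sizes and output parameterization are illustrative assumptions, not Uber ATG's model:

```python
import torch
import torch.nn as nn

class RasterPredictor(nn.Module):
    """CNN over a top-down raster of map + actors, regressing future waypoints."""

    def __init__(self, in_channels=3, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # one (x, y) waypoint per future timestep for the target actor
        self.head = nn.Linear(64, 2 * horizon)

    def forward(self, raster):  # raster: (B, C, H, W) bird's-eye-view image
        feat = self.backbone(raster).flatten(1)
        return self.head(feat).view(-1, self.horizon, 2)

trajectory = RasterPredictor()(torch.randn(1, 3, 128, 128))  # -> (1, 30, 2)
```

Map constraints of the kind the talk covers could then enter as auxiliary losses, e.g., penalizing waypoints that fall outside drivable area in the raster.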
Fri 6:20 a.m. - 6:30 a.m. | Q&A: Nemanja Djuric (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Nemanja Djuric
Fri 6:30 a.m. - 6:40 a.m. | Coffee break
Fri 6:40 a.m. - 7:10 a.m. | Invited Talk: Under the Radar: System-Level Self-Supervision for Radar Perception and Navigation (Ingmar Posner) (Talk)
Video: https://youtu.be/VhCNOuxNqpA

Abstract: By providing long-range information and significant robustness to environmental conditions, radar perfectly complements some of the more commonly used sensing modalities in autonomous driving. However, radar data is also notoriously difficult to work with: significant, context-dependent sensing artefacts and noise characteristics make interpreting and using this data a real challenge. In this talk I will describe some of the work done in the Applied AI Lab at Oxford on leveraging learning to enable radar-based perception and navigation. In particular, I will talk about how we use system-level self-supervision -- the use of adjacent sensing or subsystems to derive a learning signal during training -- in order to make radar data palatable during deployment. I will introduce work that explicitly accounts for the particular noise characteristics of a radar in order to map from raw radar scans to occupancy grids; I will describe an approach to interpretable ego-motion estimation that learns an inherent distraction suppression; and I will give an overview of how we can construct a fully fledged radar-based navigation system.

Bio: Ingmar leads the Applied Artificial Intelligence Lab at Oxford University and is a founding director of the Oxford Robotics Institute. His goal is to enable robots to robustly and effectively operate in complex, real-world environments. His research is guided by a vision to create machines which constantly improve through experience. In doing so, Ingmar's work explores a number of intellectual challenges at the heart of robot learning, such as unsupervised scene interpretation, action inference and machine introspection. All the while, Ingmar's research remains grounded in real-world robotics applications such as manipulation, autonomous driving, logistics and space exploration. In 2014 Ingmar co-founded Oxbotica, a multi-award-winning provider of mobile autonomy software solutions.

Ingmar Posner
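System-level self-supervision is easy to state in code. A minimal sketch, assuming (hypothetically) that occupancy grids derived from an adjacent LiDAR subsystem provide free training labels for a radar network; the architecture and loss are our illustrative assumptions, not the Oxford models:

```python
import torch
import torch.nn as nn

# small fully convolutional net: (B, 1, H, W) radar scan -> occupancy logits
radar_to_occupancy = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)

def self_supervised_step(radar_scan, lidar_occupancy, optimizer):
    """One training step: the label comes from another subsystem, not a human."""
    logits = radar_to_occupancy(radar_scan)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, lidar_occupancy)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

opt = torch.optim.Adam(radar_to_occupancy.parameters(), lr=1e-3)
radar = torch.rand(4, 1, 64, 64)                      # stand-in radar scans
labels = (torch.rand(4, 1, 64, 64) > 0.5).float()     # stand-in LiDAR occupancy
print(self_supervised_step(radar, labels, opt))
```

At deployment only the radar branch is needed, which is what makes the scheme attractive: the auxiliary sensor is a training-time crutch, not a runtime dependency.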
Fri 7:10 a.m. - 7:20 a.m. | Q&A: Ingmar Posner (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Ingmar Posner
Fri 7:20 a.m. - 7:50 a.m. | Invited Talk: Motion Prediction for Vulnerable Road Users (Dariu Gavrila) (Talk)
Video: https://youtu.be/qWKfhrhDtlU

Abstract: Sensors have meanwhile become very good at measuring 3D in the context of environment perception for self-driving vehicles. Scene labeling and object detection have also made big strides, mainly due to advances in deep learning. The time has now come to focus on the next frontier: modeling and anticipating the motion of road users. The potential benefits are large, such as earlier and more effective system reactions in dangerous traffic situations. To reap these benefits, however, it is necessary to use sophisticated predictive motion models based on intent-relevant (context) cues. In this talk, I give an overview of predictive motion models and intent-relevant cues with respect to vulnerable road users (i.e., pedestrians and cyclists). In particular, I discuss the pros and cons of having these models handcrafted by an expert versus learning them from data. I present results from a recent case study on cyclist path prediction involving a Dynamic Bayesian Network and a Recurrent Neural Network.

Bio: Dariu M. Gavrila received the PhD degree in computer science from the University of Maryland at College Park, USA, in 1996. From 1997 to 2016, he was with Daimler R&D in Ulm, Germany, where he became a Distinguished Scientist. From 2003 to 2018, he was also a part-time professor at the University of Amsterdam, chairing the area of Intelligent Perception Systems. Since 2016 he has headed the Intelligent Vehicles group at TU Delft, full time (www.intelligent-vehicles.org). Over the past 20 years, Prof. Gavrila has focused on visual systems for detecting humans and their activity, with applications to intelligent vehicles, smart surveillance and social robotics. He led the multi-year pedestrian detection research effort at Daimler R&D, which was commercialized in the Mercedes-Benz S-, E-, and C-Class models (2013-2014). He now performs research on self-driving cars in complex urban environments, focusing on the anticipation of pedestrian and cyclist behavior. Prof. Gavrila is frequently cited in the scientific literature (Google Scholar: 13,000+ citations) and he received the I/O 2007 Award from the Netherlands Organisation for Scientific Research (NWO) and the IEEE Intelligent Transportation Systems Application Award 2014 (as part of a Daimler team). He has served as Area and Program Co-Chair at several conferences (IV, ICCV, ECCV, AVSS). His research has been covered in various print and broadcast media, such as Wired Magazine, Der Spiegel, BBC Radio, 3Sat Nano and NOS.

Dariu Gavrila
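For the learned side of such a comparison, a recurrent trajectory predictor can be surprisingly small. A minimal sketch, assuming a GRU encoder over the observed track and a linear decoder of future positions; the case study's actual DBN and RNN models additionally condition on intent-relevant context cues, so this is only an illustration:

```python
import torch
import torch.nn as nn

class TrajectoryRNN(nn.Module):
    def __init__(self, hidden=64, horizon=25):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, 2 * horizon)

    def forward(self, observed):  # observed: (B, T_obs, 2) past (x, y) positions
        _, h = self.encoder(observed)          # final hidden state summarizes the track
        return self.decoder(h[-1]).view(-1, self.horizon, 2)

future = TrajectoryRNN()(torch.randn(8, 20, 2))  # -> (8, 25, 2) predicted paths
```

A handcrafted alternative, by contrast, would encode expert knowledge directly, e.g., a Dynamic Bayesian Network whose latent mode ("continuing" vs. "turning") switches the motion model.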
Fri 7:50 a.m. - 8:00 a.m. | Q&A: Dariu Gavrila (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Dariu Gavrila
Fri 8:00 a.m. - 8:30 a.m. | Panel Discussion 1 (Panel Discussion)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Daniel Cremers · Nemanja Djuric · Ingmar Posner · Dariu Gavrila
Fri 8:30 a.m. - 8:35 a.m. | Paper presentation opening (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Rowan McAllister · Li Erran Li · Adrien Gaidon · Sven Kreiss · Wei-Lun (Harry) Chao
Fri 8:35 a.m. - 9:40 a.m. | Paper Q&A session 1 (Q&A)
Zoom links:
Multi-agent Graph Reinforcement Learning for Connected Automated Driving: https://us02web.zoom.us/j/83693184668?pwd=SEZWTGpjT1loaFZORzVQUXUvNG12QT09
Autonomous Driving with Reinforcement Learning and Rule-based Policies: https://us02web.zoom.us/j/85016432368?pwd=aU9qVFJqVmZ0WW5LWmNjSXJlLzJHQT09
Imitation Learning Approach for AI Driving Olympics Trained on Real-world and Simulation Data Simultaneously: https://us02web.zoom.us/j/89707469981?pwd=R3pMcGJYajJVZ1VRbURsOFdYSC9hdz09
Deep Representation Learning and Clustering of Traffic Scenarios: https://us02web.zoom.us/j/88245648710?pwd=T2p1ZVBVRDhYSXRxR3N2dzlFR3VnQT09
Depth Meets CNN: A Fusion Based Approach for Semantic Road Segmentation: https://us02web.zoom.us/j/83158491638?pwd=QzBodjFKaHk5bzZKZnNUNUFzMXlGUT09
SalsaNext: Fast, Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving: https://us02web.zoom.us/j/89198067302?pwd=WG14dGMrRUNlUVRWTUNXMEZVcFNqZz09
Trajectograms: Which Semi-Supervised Trajectory Prediction Model to Use?: https://us02web.zoom.us/j/82135269788?pwd=RzNvRklFSUp6S3VWWFdGWWkrOWlYZz09
Learning Multiplicative Interactions with Bayesian Neural Networks for Visual-Inertial Odometry: https://us02web.zoom.us/j/82017234842?pwd=a2RTTXVJODBqeGE3NWdBdmZpTGRKUT09
Learning Invariant Representations for Reinforcement Learning without Reconstruction: https://us02web.zoom.us/j/89706077961?pwd=cm0yY3UwZ3g4eXRKNDY0aDZyaXVCdz09
Towards Map-Based Validation of Semantic Segmentation Masks: https://us02web.zoom.us/j/89373300557?pwd=REZRMXZrdXJKQ0NTOU91TzFzVTNvZz09
Probabilistic Object Detection: Strengths, Weaknesses, Opportunities: https://us02web.zoom.us/j/85454121324?pwd=d0tCb25OL2toYjBaUkc0aFNidk9aZz09
Interpretable End-to-end Autonomous Driving with Reinforcement Learning: https://us02web.zoom.us/j/82247999794?pwd=WHdMWTBJZnhrVktqUXVEYnJiTjdLUT09
Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts?: https://us02web.zoom.us/j/84540391510?pwd=SUdCeVNpcE5iMTFCN0NmMGtUS3JKQT09
INSTA-YOLO: Real-Time Instance Segmentation based on YOLO: https://us02web.zoom.us/j/84371112426?pwd=by9QWmYxUzFaQXArRExQdkt1azZKQT09
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Multi-agent Graph Reinforcement Learning for Connected Automated Driving (Talk)
Video: https://slideslive.com/38931756/multiagent-graph-reinforcement-learning-for-connected-automated-driving
Individual Zoom meeting: https://us02web.zoom.us/j/83693184668?pwd=SEZWTGpjT1loaFZORzVQUXUvNG12QT09
Tianyu Shi
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Autonomous Driving with Reinforcement Learning and Rule-based Policies (Talk)
Video: https://slideslive.com/38931743
Individual Zoom meeting: https://us02web.zoom.us/j/85016432368?pwd=aU9qVFJqVmZ0WW5LWmNjSXJlLzJHQT09
Amarildo Likmeta
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Imitation Learning Approach for AI Driving Olympics Trained on Real-world and Simulation Data Simultaneously (Talk)
Video: https://slideslive.com/38931755
Individual Zoom meeting: https://us02web.zoom.us/j/89707469981?pwd=R3pMcGJYajJVZ1VRbURsOFdYSC9hdz09
Mikita Sazanovich
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Deep Representation Learning and Clustering of Traffic Scenarios (Talk)
Video: https://slideslive.com/38931754
Individual Zoom meeting: https://us02web.zoom.us/j/88245648710?pwd=T2p1ZVBVRDhYSXRxR3N2dzlFR3VnQT09
Nick Harmening · Stephan Günnemann · Marin Biloš
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Depth Meets CNN: A Fusion Based Approach for Semantic Road Segmentation (Talk)
Video: https://slideslive.com/38931753
Individual Zoom meeting: https://us02web.zoom.us/j/83158491638?pwd=QzBodjFKaHk5bzZKZnNUNUFzMXlGUT09
Abhinav Atrishi · Deepak Singh · Sarthak Gupta · Raghav Marwaha
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: SalsaNext: Fast, Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving (Talk)
Video: https://slideslive.com/38931752
Individual Zoom meeting: https://us02web.zoom.us/j/89198067302?pwd=WG14dGMrRUNlUVRWTUNXMEZVcFNqZz09
Eren Aksoy
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Trajectograms: Which Semi-Supervised Trajectory Prediction Model to Use? (Talk)
Video: https://slideslive.com/38931751
Individual Zoom meeting: https://us02web.zoom.us/j/82135269788?pwd=RzNvRklFSUp6S3VWWFdGWWkrOWlYZz09
Nick Lamm · Iddo Drori · Shashank Jaiprakash
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Learning Multiplicative Interactions with Bayesian Neural Networks for Visual-Inertial Odometry (Talk)
Video: https://slideslive.com/38931750
Individual Zoom meeting: https://us02web.zoom.us/j/82017234842?pwd=a2RTTXVJODBqeGE3NWdBdmZpTGRKUT09
Kashmira Shinde · Jongseok Lee
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Learning Invariant Representations for Reinforcement Learning without Reconstruction (Talk)
Video: https://slideslive.com/38931749
Individual Zoom meeting: https://us02web.zoom.us/j/89706077961?pwd=cm0yY3UwZ3g4eXRKNDY0aDZyaXVCdz09
Amy Zhang
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Towards Map-Based Validation of Semantic Segmentation Masks (Talk)
Video: https://slideslive.com/38931748
Individual Zoom meeting: https://us02web.zoom.us/j/89373300557?pwd=REZRMXZrdXJKQ0NTOU91TzFzVTNvZz09
Laura von Rueden
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Probabilistic Object Detection: Strengths, Weaknesses, Opportunities (Talk)
Video: https://slideslive.com/38931747
Individual Zoom meeting: https://us02web.zoom.us/j/85454121324?pwd=d0tCb25OL2toYjBaUkc0aFNidk9aZz09
Dhaivat Jitendra Bhatt
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Interpretable End-to-end Autonomous Driving with Reinforcement Learning (Talk)
Video: https://slideslive.com/38931746
Individual Zoom meeting: https://us02web.zoom.us/j/82247999794?pwd=WHdMWTBJZnhrVktqUXVEYnJiTjdLUT09
Jianyu Chen
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? (Talk)
Video: https://slideslive.com/38931745
Individual Zoom meeting: https://us02web.zoom.us/j/84540391510?pwd=SUdCeVNpcE5iMTFCN0NmMGtUS3JKQT09
Angelos Filos · Panagiotis Tigas
Fri 8:35 a.m. - 9:40 a.m. | Paper spotlight: INSTA-YOLO: Real-Time Instance Segmentation based on YOLO (Talk)
Video: https://slideslive.com/38931744
Individual Zoom meeting: https://us02web.zoom.us/j/84371112426?pwd=by9QWmYxUzFaQXArRExQdkt1azZKQT09
Eslam Mohamed Abd El Rahman
Fri 10:25 a.m. - 10:30 a.m. | Open Remark 2 (Talk)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Wei-Lun (Harry) Chao · Sven Kreiss · Rowan McAllister · Li Erran Li · Adrien Gaidon
Fri 10:30 a.m. - 11:00 a.m. | Invited Talk: Neural Motion Planning for Self-Driving (Raquel Urtasun) (Talk)
Video: https://slideslive.com/38930753/neural-motion-planning-for-selfdriving

Bio: Raquel Urtasun is the Chief Scientist for Uber ATG and the Head of Uber ATG Toronto. She is also a Professor at the University of Toronto, a Canada Research Chair in Machine Learning and Computer Vision, and a co-founder of the Vector Institute for AI. She received her Ph.D. from the Ecole Polytechnique Fédérale de Lausanne (EPFL) in 2006 and did her postdoc at MIT and UC Berkeley. She is a recipient of an NSERC EWR Steacie Award, an NVIDIA Pioneers of AI Award, a Ministry of Education and Innovation Early Researcher Award, three Google Faculty Research Awards, an Amazon Faculty Research Award, a Connaught New Researcher Award, a Fallona Family Research Award, and two Best Paper Runner-Up Prizes awarded at CVPR 2013 and 2017. She was also named Chatelaine 2018 Woman of the Year, and one of Toronto's top influencers of 2018 by Adweek magazine.

Raquel Urtasun
Fri 11:00 a.m. - 11:10 a.m. | Q&A: Raquel Urtasun (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Raquel Urtasun
Fri 11:10 a.m. - 11:40 a.m. | Invited Talk: Autonomous Driving: The Way Forward (Vladlen Koltun) (Talk)
Video: https://youtu.be/XmtTjqimW3g
If you have questions for Vladlen Koltun, please contact him via http://vladlen.info/contact/

Bio: Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel. He directs the Intelligent Systems Lab, which conducts high-impact basic research in computer vision, machine learning, robotics, and related areas. He has mentored more than 50 PhD students, postdocs, research scientists, and PhD student interns, many of whom are now successful research leaders. Web: http://vladlen.info

Vladlen Koltun
Fri 11:40 a.m. - 12:00 p.m. | Coffee break
Fri 12:00 p.m. - 12:30 p.m. | Invited Talk: What we learned from Argoverse Competitions (James Hays) (Talk)
Video: https://slideslive.com/38930752/what-we-learned-from-argoverse-competitions

Abstract: This talk will have two parts. First, I'll discuss what we've learned from the Argoverse competitions in 2020 and 2019. We'll analyze the strategies used by the top-scoring teams in 3D tracking and motion forecasting, and examine situations where there is still room for improvement. In the second part, I'll discuss the "inflation" of 2D instance segmentations into 3D cuboids suitable for training 3D object detectors. With the help of an HD map, 2D instance masks can be converted into surprisingly accurate 3D training data for LiDAR-based detectors. We show that we can mine 3D cuboids from unlabeled self-driving logs and train a 3D detector that outperforms a human-supervised baseline.

Bio: James Hays has been an associate professor of computing at the Georgia Institute of Technology since fall 2015. Since 2017, he has also worked with Argo AI to create self-driving cars. Previously, he was the Manning assistant professor of computer science at Brown University. He received his Ph.D. from Carnegie Mellon University and was a postdoc at the Massachusetts Institute of Technology. His research interests span computer vision, robotics, and machine learning. His research often involves exploiting non-traditional data sources (e.g., internet imagery, crowdsourced annotations, thermal imagery, human sketches, autonomous vehicle sensor data) to explore new research problems (e.g., global geolocalization, sketch to real, hand-object contact prediction).

James Hays
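The geometric core of the "inflation" step can be sketched simply: keep the LiDAR points that project inside a 2D instance mask, then fit a box to them. A rough sketch under our own simplifying assumptions; the actual pipeline also uses HD-map cues such as ground height and lane orientation to produce properly oriented cuboids:

```python
import numpy as np

def inflate_mask_to_box(lidar_xyz, mask, K):
    """lidar_xyz: (N, 3) points in the camera frame; mask: (H, W) bool
    instance mask; K: (3, 3) camera intrinsics."""
    pts3d = lidar_xyz[lidar_xyz[:, 2] > 0]        # keep points in front of the camera
    uvw = (K @ pts3d.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # project to pixel coordinates
    H, W = mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    hits = pts3d[valid][mask[uv[valid, 1], uv[valid, 0]]]
    if len(hits) == 0:
        return None
    # axis-aligned cuboid from the selected points; a map-derived heading
    # and ground height would refine this in practice
    center = hits.mean(axis=0)
    size = hits.max(axis=0) - hits.min(axis=0)
    return center, size
```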
Fri 12:30 p.m. - 12:40 p.m. | Q&A: James Hays (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
James Hays
Fri 12:40 p.m. - 1:10 p.m. | Invited Talk: Feedback in Imitation Learning: Confusion on Causality and Covariate Shift (Arun Venkatraman & Sanjiban Choudhury) (Talk)
Video: https://slideslive.com/38930758/feedback-in-imitation-learning

Abstract: Imitation learning practitioners have often noted that adding previous actions as a feature leads to a dramatic divergence between "held out" error and the performance of the learner in situ. Interactive approaches (Ross et al., 2011; de Haan et al., 2019) can provably address this divergence but require repeated querying of a demonstrator. Recent work identifies this divergence as stemming from a "causal confound" (Pearl et al., 2016) in predicting the current action, and seeks to ablate away past actions using tools from causal inference. In this work, we first conclude that neither the stated model nor the experimental setups exhibit any causal confounding, and thus this cannot explain the empirical observations. We note that in these settings of feedback between decisions and features, the learner comes to rely on features that are strongly predictive of decisions but are also subject to strong covariate shift. Our work demonstrates a broad class of problems where this shift can be mitigated, both theoretically and practically, by taking advantage of a simulator but without any further querying of expert demonstrations. We evaluate our approach on several benchmark control domains and show that it outperforms other baselines that use only such cached demonstrations.

Bio: Arun Venkatraman is a founding engineer at Aurora, the company delivering self-driving technology safely, quickly, and broadly. Arun graduated with a BS with Honors from the California Institute of Technology and completed his PhD, "Training Strategies for Time Series: Learning for Prediction, Filtering, and Reinforcement Learning", at the Robotics Institute at Carnegie Mellon University, co-advised by Dr. Drew Bagnell and Dr. Martial Hebert. During his time at CMU and NREC, Arun worked on a variety of robotics applications and received a best paper award at Robotics: Science and Systems 2015 for work on autonomy-assisted teleoperation via a brain-computer interface. At Aurora, Arun leads the Motion Planning Machine Learning team, bringing together the best in machine learning with the best practices in robotics development to develop the Aurora Driver.

Sanjiban Choudhury is a research engineer at Aurora, where he works with the best to solve self-driving at scale. He focuses on theory and algorithms at the intersection of machine learning and motion planning. Much of his research has been deployed on real-world robotic systems: full-scale helicopters, self-driving cars and mobile manipulators. He has a PhD from The Robotics Institute at Carnegie Mellon University, where he was advised by Sebastian Scherer. His thesis showed how robots can learn from prior experience to speed up online planning. He was a postdoctoral fellow at the University of Washington CSE, where he worked with Sidd Srinivasa. He is the recipient of best paper awards at AHS 2014 and ICAPS 2019, winner of the 2018 Howard Hughes Award, and a 2013 Siebel Scholar.

Sanjiban Choudhury · Arun Venkatraman
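The failure mode the abstract describes is easy to reproduce in a toy regression. An illustrative sketch, entirely our own construction (not the authors' benchmarks): the expert acts on the true state, the learner sees a noisy state plus the previous action, and least squares duly puts weight on the previous action; run in closed loop, the learner's own imperfect actions then recirculate through that feature, which is exactly the covariate shift at issue.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000

# smooth latent state, and an expert feedback law acting on it
state = np.zeros(T)
for t in range(1, T):
    state[t] = 0.95 * state[t - 1] + 0.1 * rng.normal()
expert_action = -0.5 * state + 0.02 * rng.normal(size=T)

# the learner observes a noisy state, plus the previous expert action
obs = state + 0.2 * rng.normal(size=T)
X = np.stack([obs[1:], expert_action[:-1]], axis=1)
y = expert_action[1:]

w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("weight on observed state:  %.2f" % w[0])
print("weight on previous action: %.2f" % w[1])
# held-out error looks great, yet the previous-action feature is the one
# whose distribution shifts once the policy feeds back its own actions
```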
Fri 1:10 p.m. - 1:20 p.m. | Q&A: Arun Venkatraman & Sanjiban Choudhury (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Sanjiban Choudhury · Arun Venkatraman
Fri 1:20 p.m. - 1:35 p.m. | Invited Talk: INTERPRET: INTERACTION-dataset-based PREdicTion Challenge (Wei Zhan) (Talk)
Video: https://slideslive.com/38930879/interpret-interactiondatasetbased-prediction-challenge

Abstract: It is a consensus in both academia and industry that behavior prediction is one of the most challenging problems blocking the realization of fully autonomous vehicles. It is a key asset for the behavior-related research community to have motion datasets with highly interactive driving behavior and critical situations in complex scenarios with different driving cultures. Prediction benchmarks with comprehensive evaluations are also crucial. This talk presents the INTERACTION dataset, which provides highly accurate trajectories of various road users with densely interactive and critical behavior from different countries. Corresponding HD maps with full semantics of lane connections and traffic rules are also included in the dataset. The prediction challenge based on the INTERACTION dataset, INTERPRET, a NeurIPS'20 Competition, is also presented in this talk. The challenge offers multiple tracks to test the capabilities of a prediction model in terms of data approximation, generalizability, and fatality in both open-loop and closed-loop settings. The results on the leaderboard in the preliminary stage of the challenge are also briefly discussed.

Bio: Wei Zhan is a Postdoctoral Scholar at UC Berkeley working with Professor Masayoshi Tomizuka. He received his Ph.D. from UC Berkeley in 2019. His research focus is interactive prediction and planning for autonomous driving, and his research interests span robotics, control, computer vision and machine learning. He has coordinated the research activities of the Autonomous Driving Group in the Mechanical Systems Control Lab for years, from perception and prediction to decision and control on real autonomous vehicles. One of his publications on probabilistic prediction received the Best Student Paper Award at the IEEE Intelligent Vehicles Symposium 2018 (IV'18). He is the lead author of the INTERACTION dataset, which provides highly interactive driving behavior in various complex scenarios from different countries. He is a key organizer of the prediction challenge based on the INTERACTION dataset, a NeurIPS'20 Competition. He has also organized several workshops on Behavior Prediction and Decision (IV'19), Prediction Datasets and Benchmarks (IROS'19), and Socially Compatible Behavior Generation (IV'20).

Wei Zhan
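Trajectory prediction benchmarks of this kind are typically scored with displacement errors. A minimal sketch of the two standard metrics (whether INTERPRET's tracks use exactly these definitions is an assumption on our part):

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean L2 distance over the whole horizon.
    pred, gt: (T, 2) predicted and ground-truth (x, y) positions."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def fde(pred, gt):
    """Final Displacement Error: L2 distance at the last timestep."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

gt = np.cumsum(np.full((30, 2), 0.5), axis=0)   # straight ground-truth path
pred = gt + 0.3                                 # constant-offset prediction
print(ade(pred, gt), fde(pred, gt))             # both ~0.42 (= 0.3 * sqrt(2))
```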
Fri 1:35 p.m. - 1:40 p.m. | Q&A: Wei Zhan (Q&A)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Wei Zhan
Fri 1:40 p.m. - 2:10 p.m. | Panel Discussion 2 (Panel Discussion)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
James Hays · James Bagnell · Raquel Urtasun
Fri 2:10 p.m. - 2:15 p.m. | Closing remark (best paper award: sponsored by NVIDIA) (Talk)
Zoom webinar: https://us02web.zoom.us/j/83855151644?pwd=TGhQRXppZ1pKUmxURHYvU1RDbzBWUT09
Wei-Lun (Harry) Chao · Rowan McAllister · Li Erran Li · Adrien Gaidon · Sven Kreiss
Fri 2:15 p.m. - 3:00 p.m. | Paper Q&A session 2 (Q&A)
Zoom links:
Multi-agent Graph Reinforcement Learning for Connected Automated Driving: https://us02web.zoom.us/j/83693184668?pwd=SEZWTGpjT1loaFZORzVQUXUvNG12QT09
Autonomous Driving with Reinforcement Learning and Rule-based Policies: https://us02web.zoom.us/j/85016432368?pwd=aU9qVFJqVmZ0WW5LWmNjSXJlLzJHQT09
Imitation Learning Approach for AI Driving Olympics Trained on Real-world and Simulation Data Simultaneously: https://us02web.zoom.us/j/89707469981?pwd=R3pMcGJYajJVZ1VRbURsOFdYSC9hdz09
Deep Representation Learning and Clustering of Traffic Scenarios: https://us02web.zoom.us/j/88245648710?pwd=T2p1ZVBVRDhYSXRxR3N2dzlFR3VnQT09
Depth Meets CNN: A Fusion Based Approach for Semantic Road Segmentation: https://us02web.zoom.us/j/83158491638?pwd=QzBodjFKaHk5bzZKZnNUNUFzMXlGUT09
SalsaNext: Fast, Uncertainty-aware Semantic Segmentation of LiDAR Point Clouds for Autonomous Driving: https://us02web.zoom.us/j/89198067302?pwd=WG14dGMrRUNlUVRWTUNXMEZVcFNqZz09
Trajectograms: Which Semi-Supervised Trajectory Prediction Model to Use?: https://us02web.zoom.us/j/82135269788?pwd=RzNvRklFSUp6S3VWWFdGWWkrOWlYZz09
Learning Multiplicative Interactions with Bayesian Neural Networks for Visual-Inertial Odometry: https://us02web.zoom.us/j/82017234842?pwd=a2RTTXVJODBqeGE3NWdBdmZpTGRKUT09
Learning Invariant Representations for Reinforcement Learning without Reconstruction: https://us02web.zoom.us/j/89706077961?pwd=cm0yY3UwZ3g4eXRKNDY0aDZyaXVCdz09
Towards Map-Based Validation of Semantic Segmentation Masks: https://us02web.zoom.us/j/89373300557?pwd=REZRMXZrdXJKQ0NTOU91TzFzVTNvZz09
Probabilistic Object Detection: Strengths, Weaknesses, Opportunities: https://us02web.zoom.us/j/85454121324?pwd=d0tCb25OL2toYjBaUkc0aFNidk9aZz09
Interpretable End-to-end Autonomous Driving with Reinforcement Learning: https://us02web.zoom.us/j/82247999794?pwd=WHdMWTBJZnhrVktqUXVEYnJiTjdLUT09
Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts?: https://us02web.zoom.us/j/84540391510?pwd=SUdCeVNpcE5iMTFCN0NmMGtUS3JKQT09
INSTA-YOLO: Real-Time Instance Segmentation based on YOLO: https://us02web.zoom.us/j/84371112426?pwd=by9QWmYxUzFaQXArRExQdkt1azZKQT09
Author Information
Wei-Lun (Harry) Chao (Ohio State University)
Rowan McAllister (UC Berkeley)
Adrien Gaidon (Toyota Research Institute)
Li Erran Li (Alexa AI, Amazon)
Sven Kreiss (EPFL)