A diverse set of methods has been devised to develop autonomous driving platforms. They range from modular systems, in which the problem is manually decomposed into components that are optimized independently and large numbers of rules are programmed by hand, to end-to-end deep-learning frameworks. Today’s systems rely on a subset of the following: camera images, HD maps, inertial measurement units, wheel encoders, and active 3D sensors (LIDAR, radar). There is general agreement that, whichever of these approaches is taken, much of the self-driving software stack will continue to incorporate some form of machine learning in the future.
Self-driving cars present one of today’s greatest challenges and opportunities for Artificial Intelligence (AI). Despite substantial investments, existing methods for building autonomous vehicles have not yet succeeded, i.e., there are no driverless cars on public roads today without human safety drivers. Nevertheless, a few groups have started working on extending the idea of learned tasks to larger functions of autonomous driving. Initial results on learned road following are very promising.
The goal of this workshop is to explore ways to create a framework that is capable of learning autonomous driving capabilities beyond road following, towards fully driverless cars. The workshop will consider the current state of learning applied to autonomous vehicles and will explore how learning may be used in future systems. The workshop will span both theoretical frameworks and practical issues, especially in the area of deep learning.
Sat 9:00 a.m. - 9:15 a.m. | Opening Remarks (Talk)
Sat 9:15 a.m. - 9:40 a.m. | Sven Kreiss: "Compositionality, Confidence and Crowd Modeling for Self-Driving Cars" (Talk)
I will present our recent works related to the three AI pillars of a self-driving car: perception, prediction, and planning. For the perception pillar, I will present new human pose estimation and monocular distance estimation methods that use a loss that learns its own confidence, the Laplace loss. For prediction, I will show our investigations on interpretable models where we apply deep learning techniques within structured and hand-crafted classical models for path prediction in social contexts. For the third pillar, planning, I will show our crowd-robot interaction module that uses attention-based representation learning suitable for planning in an RL environment with multiple people.
Sven Kreiss · Alexandre Alahi
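The "loss that learns its own confidence" mentioned in the abstract is, at its core, the negative log-likelihood of a Laplace distribution in which the network predicts both a value and its scale. The sketch below is a minimal PyTorch illustration of that idea for a scalar regression target (constant terms are dropped); the names are illustrative and not taken from the authors' code.

    import torch

    def laplace_nll(pred, log_b, target):
        """Negative log-likelihood of a Laplace distribution (up to a constant).

        pred:   predicted value (e.g., a keypoint coordinate or a distance)
        log_b:  predicted log-scale; a large scale b means low confidence
        target: ground-truth value
        The |error| / b term is down-weighted where the network declares low
        confidence, while the + log_b term penalizes being uncertain everywhere.
        """
        b = torch.exp(log_b)
        return (torch.abs(pred - target) / b + log_b).mean()

    # toy usage: a network head outputs (value, log_b) for each prediction
    pred = torch.tensor([2.3, 5.1], requires_grad=True)
    log_b = torch.tensor([0.0, 0.5], requires_grad=True)
    target = torch.tensor([2.0, 4.0])
    loss = laplace_nll(pred, log_b, target)
    loss.backward()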
Sat 9:40 a.m. - 10:05 a.m. | Mayank Bansal: "ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst" (Talk)
Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a real car at our test facility.
Mayank Bansal
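A minimal sketch of the loss structure the abstract describes: the imitation term is combined with penalties for collisions and off-road events, which become informative once the expert trajectories are perturbed. Function names, weights, and the perturbation scheme below are illustrative assumptions, not the ChauffeurNet code.

    import torch

    def training_loss(pred_traj, expert_traj, collision_prob, offroad_prob,
                      w_imit=1.0, w_coll=1.0, w_road=1.0):
        """Imitation loss augmented with environment-aware penalty terms.

        pred_traj / expert_traj: (T, 2) future waypoints
        collision_prob / offroad_prob: per-step probabilities of overlapping
        other agents or leaving the drivable area, assumed to be computed by
        rendering the predicted trajectory against the environment.
        """
        imitation = torch.nn.functional.smooth_l1_loss(pred_traj, expert_traj)
        collision = collision_prob.mean()   # penalize predicted collisions
        offroad = offroad_prob.mean()       # penalize leaving the road
        return w_imit * imitation + w_coll * collision + w_road * offroad

    def perturb_expert(expert_traj, max_offset=1.0):
        """Synthesize a 'worst-case' example: displace the start of the
        trajectory laterally and decay the offset to zero so the trajectory
        smoothly rejoins the original expert path."""
        offset = (torch.rand(1) * 2 - 1) * max_offset
        decay = torch.linspace(1.0, 0.0, expert_traj.shape[0]).unsqueeze(1)
        return expert_traj + decay * torch.tensor([[offset.item(), 0.0]])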
Sat 10:05 a.m. - 10:30 a.m. | Chelsea Finn: "A Practical View on Generalization and Autonomy in the Real World" (Talk)
Chelsea Finn
Sat 10:50 a.m. - 11:15 a.m. | Sergey Levine: "Imitation, Prediction, and Model-Based Reinforcement Learning for Autonomous Driving" (Talk)
While machine learning has transformed passive perception -- computer vision, speech recognition, NLP -- its impact on autonomous control in real-world robotic systems has been limited due to reservations about safety and reliability. In this talk, I will discuss how end-to-end learning for control can be framed in a way that is data-driven, reliable and, crucially, easy to merge with existing model-based control pipelines based on planning and state estimation. The basic building blocks of this approach to control are generative models that estimate which states are safe and familiar, and model-based reinforcement learning, which can utilize these generative models within a planning and control framework to make decisions. By framing the end-to-end control problem as one of prediction and generation, we can make it possible to use large datasets collected by previous behavioral policies, as well as human operators, estimate confidence or familiarity of new observations to detect "unknown unknowns," and analyze the performance of our end-to-end models on offline data prior to live deployment. I will discuss how model-based RL can enable navigation and obstacle avoidance, how generative models can detect uncertain and unsafe situations, and then discuss how these pieces can be put together into the framework of deep imitative models: generative models trained via imitation of human drivers that can be incorporated into model-based control for autonomous driving, and can reason about future behavior and intentions of other drivers on the road. Finally, I will conclude with a discussion of current research that is likely to make an impact on autonomous driving and safety-critical AI systems in the near future, including meta-learning, off-policy reinforcement learning, and pixel-level video prediction models.
Sergey Levine
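One concrete way to read "generative models that estimate which states are safe and familiar" is a density model over observations whose log-likelihood is thresholded at run time. The sketch below is my own simplification, using a single Gaussian over learned features rather than any specific model from the talk: observations whose likelihood falls below a percentile of the training data are flagged as unfamiliar.

    import numpy as np

    class FamiliarityDetector:
        """Fit a Gaussian density over training features and flag observations
        whose log-likelihood falls below a low percentile of the training set."""

        def fit(self, feats, percentile=1.0):
            # feats: (N, D) features of observations seen during training
            self.mean = feats.mean(axis=0)
            self.cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
            self.cov_inv = np.linalg.inv(self.cov)
            _, self.logdet = np.linalg.slogdet(self.cov)
            lls = np.array([self.log_likelihood(f) for f in feats])
            self.threshold = np.percentile(lls, percentile)
            return self

        def log_likelihood(self, f):
            d = f - self.mean
            return -0.5 * (d @ self.cov_inv @ d + self.logdet
                           + len(f) * np.log(2 * np.pi))

        def is_unfamiliar(self, f):
            # True => treat as an "unknown unknown" and fall back to a safe policy
            return self.log_likelihood(f) < self.threshold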
Sat 11:15 a.m. - 11:40 a.m. | Wolfram Burgard (Talk)
Wolfram Burgard
Sat 11:40 a.m. - 12:05 p.m. | Dorsa Sadigh: "Influencing Interactive Mixed-Autonomy Systems" (Talk)
Dorsa Sadigh
Sat 1:30 p.m. - 2:30 p.m. | Poster Session
Heejin Jeong · Jonah Philion
Sat 2:30 p.m. - 2:55 p.m. | Alexander Amini: "Learning to Drive with Purpose" (Talk)
Deep learning has revolutionized the ability to learn "end-to-end" autonomous vehicle control directly from raw sensory data. In recent years, there have been advances to handle more complex forms of navigational instruction. However, these networks are still trained on biased human driving data (yielding biased models), and are unable to capture the full distribution of possible actions that could be taken. By learning a set of unsupervised latent variables that characterize the training data, we present an online debiasing algorithm for autonomous driving. Additionally, we extend end-to-end driving networks with the ability to drive with purpose and perform point-to-point navigation. We formulate how our model can also be used to localize the robot according to correspondences between the map and the observed visual road topology, inspired by the rough localization that human drivers can perform, even in cases where GPS is noisy or removed altogether. Our results highlight the importance of bridging the benefits from end-to-end learning with classical probabilistic reasoning and Bayesian inference to push the boundaries of autonomous driving.
Alexander Amini
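The debiasing idea described in the abstract can be illustrated with a simple histogram-based scheme: estimate how common each training example is in a learned latent space and sample rare examples more often. The sketch below assumes latent codes are already available (e.g., from a VAE encoder); all names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def debiased_sampling_probs(latents, bins=10, alpha=0.01):
        """Per-example sampling probabilities inversely related to the
        estimated density of each example in latent space.

        latents: (N, D) latent codes of the training set
        bins:    histogram bins per latent dimension
        alpha:   smoothing term so no probability collapses to zero
        """
        n, d = latents.shape
        density = np.ones(n)
        # factorized histogram density estimate, one dimension at a time
        for j in range(d):
            hist, edges = np.histogram(latents[:, j], bins=bins, density=True)
            idx = np.digitize(latents[:, j], edges[1:-1])
            density *= hist[idx] + 1e-12
        weights = 1.0 / (density + alpha)
        return weights / weights.sum()

    # usage: draw minibatch indices with rare (low-density) examples upweighted
    # probs = debiased_sampling_probs(latent_codes)
    # batch_idx = np.random.choice(len(probs), size=64, p=probs)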
Sat 2:55 p.m. - 3:20 p.m. | Fisher Yu: "Motion and Prediction for Autonomous Driving" (Talk)
Fisher Yu · Trevor Darrell
Sat 3:20 p.m. - 3:45 p.m. | Alfredo Canziani: "Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic" (Talk)
Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. In this work, we propose to train a policy while explicitly penalizing the mismatch between these two distributions over a fixed time horizon. We do this by using a learned model of the environment dynamics which is unrolled for multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory. This cost contains two terms: a policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We propose to measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction.
Alfredo Canziani
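The two-term objective described in the abstract can be sketched as follows: a learned dynamics model is unrolled for a fixed horizon under the policy, a task (policy) cost is accumulated, and an uncertainty cost, estimated here from the variance of the dynamics model's predictions under dropout, penalizes drifting into states the model is unsure about. The structure and names below are illustrative assumptions, not the authors' code.

    import torch

    def task_cost(state, action):
        # illustrative: penalize lateral deviation and large control inputs
        return (state[..., 0] ** 2).mean() + 0.1 * (action ** 2).mean()

    def rollout_loss(policy, dynamics, state, horizon=10, n_dropout=4, w_u=0.5):
        """Unroll a learned dynamics model under the policy and combine a
        policy cost with an uncertainty cost over the rolled-out trajectory."""
        policy_cost, uncertainty_cost = 0.0, 0.0
        for _ in range(horizon):
            action = policy(state)
            # several stochastic forward passes (dropout kept active) to
            # estimate the model's uncertainty about its own prediction
            samples = torch.stack([dynamics(state, action) for _ in range(n_dropout)])
            next_state = samples.mean(dim=0)
            uncertainty_cost = uncertainty_cost + samples.var(dim=0).mean()
            policy_cost = policy_cost + task_cost(next_state, action)
            state = next_state
        return policy_cost + w_u * uncertainty_cost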
Sat 4:05 p.m. - 4:30 p.m. | Jianxiong Xiao: "Self-driving Car: What we can achieve today?" (Talk)
Jianxiong Xiao
Sat 4:30 p.m. - 4:55 p.m. | German Ros: "Fostering Autonomous Driving Research with CARLA" (Talk)
This talk focuses on the relevance of open source solutions to foster autonomous driving research and development. To this end, we present how CARLA has been used within the research community in the last year and what results it has enabled. We will also cover the CARLA Autonomous Driving Challenge and its relevance as an open benchmark for the driving community. We will share with the community new soon-to-be-released features and the future direction of the CARLA simulation platform.
German Ros
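For readers new to the platform, a minimal CARLA Python client looks roughly like the following (based on the 0.9.x API; details vary by version, so treat this as a sketch rather than a reference):

    import carla

    # connect to a running CARLA simulator (default port 2000)
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()

    # spawn a vehicle at one of the map's predefined spawn points
    blueprints = world.get_blueprint_library()
    vehicle_bp = blueprints.filter("vehicle.*")[0]
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)

    # attach a forward-facing RGB camera and save frames to disk
    camera_bp = blueprints.find("sensor.camera.rgb")
    camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
    camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)
    camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))

    # hand control to the built-in autopilot to collect driving data
    vehicle.set_autopilot(True)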
Sat 4:55 p.m. - 5:20 p.m. | Venkatraman Narayanan: "The Promise and Challenge of ML in Self-Driving" (Talk)
To deliver the benefits of autonomous driving safely, quickly, and broadly, learnability has to be a key element of the solution. In this talk, I will describe Aurora's philosophy towards building learnability into the self-driving architecture, avoiding the pitfalls of applying vanilla ML to problems involving feedback, and leveraging expert demonstrations for learning decision-making models. I will conclude with our approach to testing and validation.
Venkatraman Narayanan · James Bagnell
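The "pitfalls of applying vanilla ML to problems involving feedback" refers to the distribution shift a policy induces when its own mistakes take it to states never seen in the expert data. A standard illustration of one remedy (dataset aggregation, not necessarily Aurora's approach) is sketched below; the environment and expert interfaces are illustrative assumptions.

    def dagger(env, expert, train, initial_data, iterations=5, rollout_len=200):
        """Dataset aggregation: repeatedly roll out the learned policy and
        relabel the states it visits with the expert's actions, so the training
        distribution matches the states the policy actually reaches.

        Assumed interfaces: env.reset() -> state, env.step(action) -> (state, done),
        expert(state) -> action, train(data) -> policy callable.
        """
        data = list(initial_data)            # (state, expert_action) pairs
        policy = train(data)
        for _ in range(iterations):
            state = env.reset()
            for _ in range(rollout_len):
                action = policy(state)       # the policy drives, visiting its own states
                data.append((state, expert(state)))   # the expert provides the label
                state, done = env.step(action)
                if done:
                    break
            policy = train(data)             # retrain on the aggregated dataset
        return policy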
Sat 5:20 p.m. - 6:40 p.m. | Best Paper Award and Panel Discussion (Panel Discussion)
Author Information
Anna Choromanska (NYU Tandon School of Engineering)
Larry Jackel (North-C Technologies)
Li Erran Li (Scale AI)
Juan Carlos Niebles (Stanford)
Adrien Gaidon (Toyota Research Institute)
Wei-Lun Chao (Cornell)
Ingmar Posner (University of Oxford)