Sergey Levine: "Imitation, Prediction, and Model-Based Reinforcement Learning for Autonomous Driving"

Sat Jun 15 10:50 AM -- 11:15 AM (PDT)

While machine learning has transformed passive perception -- computer vision, speech recognition, NLP -- its impact on autonomous control in real-world robotic systems has been limited due to reservations about safety and reliability. In this talk, I will discuss how end-to-end learning for control can be framed in a way that is data-driven, reliable and, crucially, easy to merge with existing model-based control pipelines based on planning and state estimation. The basic building blocks of this approach to control are generative models that estimate which states are safe and familiar, and model-based reinforcement learning, which can utilize these generative models within a planning and control framework to make decisions. By framing the end-to-end control problem as one of prediction and generation, we can use large datasets collected by previous behavioral policies as well as human operators, estimate the confidence or familiarity of new observations to detect "unknown unknowns," and analyze the performance of our end-to-end models on offline data prior to live deployment. I will discuss how model-based RL can enable navigation and obstacle avoidance, how generative models can detect uncertain and unsafe situations, and how these pieces can be put together into the framework of deep imitative models: generative models trained via imitation of human drivers that can be incorporated into model-based control for autonomous driving and can reason about the future behavior and intentions of other drivers on the road. Finally, I will conclude with a discussion of current research that is likely to make an impact on autonomous driving and safety-critical AI systems in the near future, including meta-learning, off-policy reinforcement learning, and pixel-level video prediction models.
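
The abstract names two reusable ingredients: a generative (density) model that scores how familiar or safe an observation or trajectory is, and a planner that optimizes decisions against that model. The sketch below is not the speaker's implementation; it is a minimal illustration of those two ideas, assuming a hypothetical trained density/imitative model that exposes a log_prob method (names such as imitation_model, is_familiar, and plan are illustrative only).

    import torch

    # Minimal sketch (illustrative, not the talk's actual system).
    # Assumes a trained density / imitative model exposing .log_prob(...).

    def is_familiar(density_model, observation, threshold=-50.0):
        """Flag 'unknown unknowns': a low log-likelihood under the generative
        model suggests an unfamiliar, potentially unsafe observation."""
        with torch.no_grad():
            return density_model.log_prob(observation).item() > threshold

    def plan(imitative_model, goal, context, horizon=10, steps=100, lr=0.1):
        """Plan a trajectory that stays likely under an imitation-learned prior
        while ending near the goal (in the spirit of deep imitative models)."""
        traj = torch.zeros(horizon, 2, requires_grad=True)   # 2D waypoints
        opt = torch.optim.Adam([traj], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            prior = imitative_model.log_prob(traj, context)  # stay expert-like / familiar
            goal_term = -((traj[-1] - goal) ** 2).sum()      # end near the goal
            loss = -(prior + goal_term)                      # maximize both terms
            loss.backward()
            opt.step()
        return traj.detach()

In this framing, the same density model serves double duty: at planning time it keeps candidate trajectories in regions the expert distribution covers, and at monitoring time it flags observations whose likelihood falls below a threshold for conservative handling.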

Author Information

Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
