Invited Speaker: Sergey Levine

Title: Learning Plannable Representations and Planning with Learnable Skills

Abstract: Reinforcement learning provides a powerful framework for automatically learning control policies for autonomous systems. However, using RL in real-world settings, particularly in complex and safety-critical domains such as autonomous driving, is often impractical due to the need for costly and dangerous exploration. In this talk, I will discuss how the framework of offline reinforcement learning can drastically expand the applicability of RL to real-world settings, cover the fundamentals of offline RL algorithms and recent innovations, and describe some of our recent experience applying RL to problems in robotics and mobility.

Author Information

Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a PhD in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
