Model-based reinforcement learning methods often use learning only for the purpose of recovering an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility.
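The abstract packs several mechanisms into one paragraph, so a small illustration may help. The sketch below is a minimal toy rendering of the planning-as-iterative-denoising loop it describes, not the authors' released implementation: a trajectory array starts as Gaussian noise and is repeatedly denoised, a reward-gradient nudge plays the role of classifier-guided sampling, and re-clamping the first state to the current observation plays the role of inpainting-style conditioning. All names (`denoise_model`, `reward_grad`, `plan`) and constants are hypothetical placeholders.

```python
# A minimal sketch of planning by iterative trajectory denoising,
# under the assumptions stated above (toy dummies in place of learned networks).
import numpy as np

HORIZON, STATE_DIM, ACTION_DIM = 32, 4, 2
TRAJ_DIM = STATE_DIM + ACTION_DIM
N_DIFFUSION_STEPS = 50


def denoise_model(traj: np.ndarray, t: int) -> np.ndarray:
    """Stand-in for a learned network that predicts the noise component of
    `traj` at diffusion step `t`; here a dummy proportional to the sample."""
    return traj * 0.1


def reward_grad(traj: np.ndarray) -> np.ndarray:
    """Stand-in for the gradient of a learned return estimate w.r.t. the
    trajectory (the 'classifier guidance' signal); here the gradient of a
    quadratic that pretends high reward lies near an all-ones trajectory."""
    target = np.ones_like(traj)
    return target - traj


def plan(current_state: np.ndarray, guidance_scale: float = 0.1) -> np.ndarray:
    # Start from pure noise over the whole (horizon x transition) array.
    traj = np.random.randn(HORIZON, TRAJ_DIM)
    for t in reversed(range(N_DIFFUSION_STEPS)):
        # 1. Standard denoising step: remove the predicted noise.
        traj = traj - denoise_model(traj, t)
        # 2. Guidance: nudge the sample toward high predicted return.
        traj = traj + guidance_scale * reward_grad(traj)
        # 3. Inpainting-style conditioning: re-clamp the observed entries
        #    (here, the first state of the plan) after every step.
        traj[0, :STATE_DIM] = current_state
        # A real sampler would also re-inject calibrated noise for t > 0.
        if t > 0:
            traj = traj + 0.01 * np.random.randn(*traj.shape)
    return traj


if __name__ == "__main__":
    plan0 = plan(np.zeros(STATE_DIM))
    print("planned trajectory shape:", plan0.shape)      # (32, 6)
    print("first action of the plan:", plan0[0, STATE_DIM:])
```

Even in this toy, the source of the test-time flexibility the abstract claims is visible: because the full trajectory is sampled jointly, the guidance and clamping steps can be changed at planning time without retraining the underlying model.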
Author Information
Michael Janner (UC Berkeley)
Yilun Du (MIT)
Josh Tenenbaum (MIT)
Joshua Brett Tenenbaum is Professor of Cognitive Science and Computation at the Massachusetts Institute of Technology. He is known for contributions to mathematical psychology and Bayesian cognitive science. He previously taught at Stanford University, where he was the Wasow Visiting Fellow from October 2010 to January 2011. Tenenbaum received his undergraduate degree in physics from Yale University in 1993, and his Ph.D. from MIT in 1999. His work primarily focuses on analyzing probabilistic inference as the engine of human cognition and as a means to develop machine learning.
Sergey Levine (UC Berkeley)
Sergey Levine received his BS and MS in Computer Science from Stanford University in 2009 and his Ph.D. in Computer Science from Stanford in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Planning with Diffusion for Flexible Behavior Synthesis »
  Wed. Jul 20 through Thu. Jul 21, Hall E #817
More from the Same Authors
- 2021 : Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability »
  Dibya Ghosh · Jad Rahme · Aviral Kumar · Amy Zhang · Ryan P. Adams · Sergey Levine
- 2021 : Value-Based Deep Reinforcement Learning Requires Explicit Regularization »
  Aviral Kumar · Rishabh Agarwal · Aaron Courville · Tengyu Ma · George Tucker · Sergey Levine
- 2021 : Multi-Task Offline Reinforcement Learning with Conservative Data Sharing »
  Tianhe (Kevin) Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Sergey Levine · Chelsea Finn
- 2021 : Reinforcement Learning as One Big Sequence Modeling Problem »
  Michael Janner · Qiyang Li · Sergey Levine
- 2021 : ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors »
  Charles Sun · Jedrzej Orbik · Coline Devin · Abhishek Gupta · Glen Berseth · Sergey Levine
- 2022 : Distributionally Adaptive Meta Reinforcement Learning »
  Anurag Ajay · Dibya Ghosh · Sergey Levine · Pulkit Agrawal · Abhishek Gupta
- 2023 : Neuro-Symbolic Models of Human Moral Judgment: LLMs as Automatic Feature Extractors »
  Joseph Kwon · Sydney Levine · Josh Tenenbaum
- 2023 : Building Community Driven Libraries of Natural Programs »
  Leonardo Hernandez Cano · Yewen Pu · Robert Hawkins · Josh Tenenbaum · Armando Solar-Lezama
- 2023 : Inferring the Future by Imagining the Past »
  Kartik Chandra · Tony Chen · Tzu-Mao Li · Jonathan Ragan-Kelley · Josh Tenenbaum
- 2023 : Inferring the Goals of Communicating Agents from Actions and Instructions »
  Lance Ying · Tan Zhi-Xuan · Vikash Mansinghka · Josh Tenenbaum
- 2023 : The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling probabilistic social inferences from linguistic inputs »
  Lance Ying · Katie Collins · Megan Wei · Cedegao Zhang · Tan Zhi-Xuan · Adrian Weller · Josh Tenenbaum · Catherine Wong
- 2023 Oral: Inferring Relational Potentials in Interacting Systems »
  Armand Comas · Yilun Du · Christian Fernandez Lopez · Sandesh Ghimire · Mario Sznaier · Josh Tenenbaum · Octavia Camps
- 2023 Poster: On the Complexity of Bayesian Generalization »
  Yu-Zhe Shi · Manjie Xu · John Hopcroft · Kun He · Josh Tenenbaum · Song-Chun Zhu · Ying Nian Wu · Wenjuan Han · Yixin Zhu
- 2023 Poster: Inferring Relational Potentials in Interacting Systems »
  Armand Comas · Yilun Du · Christian Fernandez Lopez · Sandesh Ghimire · Mario Sznaier · Josh Tenenbaum · Octavia Camps
- 2023 Poster: Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC »
  Yilun Du · Conor Durkan · Robin Strudel · Josh Tenenbaum · Sander Dieleman · Rob Fergus · Jascha Sohl-Dickstein · Arnaud Doucet · Will Grathwohl
- 2023 Poster: Learning Neural Constitutive Laws from Motion Observations for Generalizable PDE Dynamics »
  Pingchuan Ma · Peter Yichen Chen · Bolei Deng · Josh Tenenbaum · Tao Du · Chuang Gan · Wojciech Matusik
- 2022 : Q/A Sergey Levine »
  Sergey Levine
- 2022 : Invited Speaker: Sergey Levine »
  Sergey Levine
- 2022 Poster: Offline Meta-Reinforcement Learning with Online Self-Supervision »
  Vitchyr Pong · Ashvin Nair · Laura Smith · Catherine Huang · Sergey Levine
- 2022 Poster: Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization »
  Brandon Trabucco · Xinyang Geng · Aviral Kumar · Sergey Levine
- 2022 Poster: How to Leverage Unlabeled Data in Offline Reinforcement Learning »
  Tianhe (Kevin) Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Chelsea Finn · Sergey Levine
- 2022 Poster: Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning »
  Aviv Netanyahu · Tianmin Shu · Josh Tenenbaum · Pulkit Agrawal
- 2022 Spotlight: How to Leverage Unlabeled Data in Offline Reinforcement Learning »
  Tianhe (Kevin) Yu · Aviral Kumar · Yevgen Chebotar · Karol Hausman · Chelsea Finn · Sergey Levine
- 2022 Spotlight: Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning »
  Aviv Netanyahu · Tianmin Shu · Josh Tenenbaum · Pulkit Agrawal
- 2022 Spotlight: Offline Meta-Reinforcement Learning with Online Self-Supervision »
  Vitchyr Pong · Ashvin Nair · Laura Smith · Catherine Huang · Sergey Levine
- 2022 Spotlight: Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization »
  Brandon Trabucco · Xinyang Geng · Aviral Kumar · Sergey Levine
- 2022 Poster: Streaming Inference for Infinite Feature Models »
  Rylan Schaeffer · Yilun Du · Gabrielle K Liu · Ila R. Fiete
- 2022 Spotlight: Streaming Inference for Infinite Feature Models »
  Rylan Schaeffer · Yilun Du · Gabrielle K Liu · Ila R. Fiete
- 2022 Poster: Learning Iterative Reasoning through Energy Minimization »
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2022 Poster: Offline RL Policies Should Be Trained to be Adaptive »
  Dibya Ghosh · Anurag Ajay · Pulkit Agrawal · Sergey Levine
- 2022 Poster: Prompting Decision Transformer for Few-Shot Policy Generalization »
  Mengdi Xu · Yikang Shen · Shun Zhang · Yuchen Lu · Ding Zhao · Josh Tenenbaum · Chuang Gan
- 2022 Poster: Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control »
  Katie Kang · Paula Gradu · Jason Choi · Michael Janner · Claire Tomlin · Sergey Levine
- 2022 Spotlight: Lyapunov Density Models: Constraining Distribution Shift in Learning-Based Control »
  Katie Kang · Paula Gradu · Jason Choi · Michael Janner · Claire Tomlin · Sergey Levine
- 2022 Oral: Offline RL Policies Should Be Trained to be Adaptive »
  Dibya Ghosh · Anurag Ajay · Pulkit Agrawal · Sergey Levine
- 2022 Spotlight: Learning Iterative Reasoning through Energy Minimization »
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2022 Spotlight: Prompting Decision Transformer for Few-Shot Policy Generalization »
  Mengdi Xu · Yikang Shen · Shun Zhang · Yuchen Lu · Ding Zhao · Josh Tenenbaum · Chuang Gan
- 2021 Poster: Simple and Effective VAE Training with Calibrated Decoders »
  Oleh Rybkin · Kostas Daniilidis · Sergey Levine
- 2021 Poster: WILDS: A Benchmark of in-the-Wild Distribution Shifts »
  Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang
- 2021 Oral: WILDS: A Benchmark of in-the-Wild Distribution Shifts »
  Pang Wei Koh · Shiori Sagawa · Henrik Marklund · Sang Michael Xie · Marvin Zhang · Akshay Balsubramani · Weihua Hu · Michihiro Yasunaga · Richard Lanas Phillips · Irena Gao · Tony Lee · Etienne David · Ian Stavness · Wei Guo · Berton Earnshaw · Imran Haque · Sara Beery · Jure Leskovec · Anshul Kundaje · Emma Pierson · Sergey Levine · Chelsea Finn · Percy Liang
- 2021 Spotlight: Simple and Effective VAE Training with Calibrated Decoders »
  Oleh Rybkin · Kostas Daniilidis · Sergey Levine
- 2021 Poster: Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment »
  Michael Chang · Sid Kaushik · Sergey Levine · Thomas Griffiths
- 2021 Poster: Conservative Objective Models for Effective Offline Model-Based Optimization »
  Brandon Trabucco · Aviral Kumar · Xinyang Geng · Sergey Levine
- 2021 Spotlight: Conservative Objective Models for Effective Offline Model-Based Optimization »
  Brandon Trabucco · Aviral Kumar · Xinyang Geng · Sergey Levine
- 2021 Oral: Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment »
  Michael Chang · Sid Kaushik · Sergey Levine · Thomas Griffiths
- 2021 Poster: A large-scale benchmark for few-shot program induction and synthesis »
  Ferran Alet · Javier Lopez-Contreras · James Koppel · Maxwell Nye · Armando Solar-Lezama · Tomas Lozano-Perez · Leslie Kaelbling · Josh Tenenbaum
- 2021 Spotlight: A large-scale benchmark for few-shot program induction and synthesis »
  Ferran Alet · Javier Lopez-Contreras · James Koppel · Maxwell Nye · Armando Solar-Lezama · Tomas Lozano-Perez · Leslie Kaelbling · Josh Tenenbaum
- 2021 Poster: AGENT: A Benchmark for Core Psychological Reasoning »
  Tianmin Shu · Abhishek Bhandwaldar · Chuang Gan · Kevin Smith · Shari Liu · Dan Gutfreund · Elizabeth Spelke · Josh Tenenbaum · Tomer Ullman
- 2021 Spotlight: AGENT: A Benchmark for Core Psychological Reasoning »
  Tianmin Shu · Abhishek Bhandwaldar · Chuang Gan · Kevin Smith · Shari Liu · Dan Gutfreund · Elizabeth Spelke · Josh Tenenbaum · Tomer Ullman
- 2021 Poster: Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning »
  Hiroki Furuta · Tatsuya Matsushima · Tadashi Kozuno · Yutaka Matsuo · Sergey Levine · Ofir Nachum · Shixiang Gu
- 2021 Poster: MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning »
  Kevin Li · Abhishek Gupta · Ashwin D Reddy · Vitchyr Pong · Aurick Zhou · Justin Yu · Sergey Levine
- 2021 Poster: Improved Contrastive Divergence Training of Energy-Based Models »
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2021 Poster: PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning »
  Angelos Filos · Clare Lyle · Yarin Gal · Sergey Levine · Natasha Jaques · Gregory Farquhar
- 2021 Poster: Leveraging Language to Learn Program Abstractions and Search Heuristics »
  Catherine Wong · Kevin Ellis · Josh Tenenbaum · Jacob Andreas
- 2021 Spotlight: MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning »
  Kevin Li · Abhishek Gupta · Ashwin D Reddy · Vitchyr Pong · Aurick Zhou · Justin Yu · Sergey Levine
- 2021 Spotlight: Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning »
  Hiroki Furuta · Tatsuya Matsushima · Tadashi Kozuno · Yutaka Matsuo · Sergey Levine · Ofir Nachum · Shixiang Gu
- 2021 Spotlight: Leveraging Language to Learn Program Abstractions and Search Heuristics »
  Catherine Wong · Kevin Ellis · Josh Tenenbaum · Jacob Andreas
- 2021 Spotlight: Improved Contrastive Divergence Training of Energy-Based Models »
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2021 Oral: PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning »
  Angelos Filos · Clare Lyle · Yarin Gal · Sergey Levine · Natasha Jaques · Gregory Farquhar
- 2021 Poster: Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation »
  Aurick Zhou · Sergey Levine
- 2021 Poster: Model-Based Reinforcement Learning via Latent-Space Collocation »
  Oleh Rybkin · Chuning Zhu · Anusha Nagabandi · Kostas Daniilidis · Igor Mordatch · Sergey Levine
- 2021 Spotlight: Model-Based Reinforcement Learning via Latent-Space Collocation »
  Oleh Rybkin · Chuning Zhu · Anusha Nagabandi · Kostas Daniilidis · Igor Mordatch · Sergey Levine
- 2021 Spotlight: Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation »
  Aurick Zhou · Sergey Levine
- 2020 : Invited Talk 9: Prof. Sergey Levine from UC Berkeley »
  Sergey Levine
- 2020 Poster: Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions »
  Michael Chang · Sid Kaushik · S. Matthew Weinberg · Thomas Griffiths · Sergey Levine
- 2020 Poster: Learning Human Objectives by Evaluating Hypothetical Behavior »
  Siddharth Reddy · Anca Dragan · Sergey Levine · Shane Legg · Jan Leike
- 2020 Poster: Visual Grounding of Learned Physical Models »
  Yunzhu Li · Toru Lin · Kexin Yi · Daniel Bear · Daniel Yamins · Jiajun Wu · Josh Tenenbaum · Antonio Torralba
- 2020 Poster: Skew-Fit: State-Covering Self-Supervised Reinforcement Learning »
  Vitchyr Pong · Murtaza Dalal · Steven Lin · Ashvin Nair · Shikhar Bahl · Sergey Levine
- 2020 Poster: Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? »
  Angelos Filos · Panagiotis Tigas · Rowan McAllister · Nicholas Rhinehart · Sergey Levine · Yarin Gal
- 2020 Poster: Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings »
  Jesse Zhang · Brian Cheung · Chelsea Finn · Sergey Levine · Dinesh Jayaraman
- 2019 : Sergey Levine: "Imitation, Prediction, and Model-Based Reinforcement Learning for Autonomous Driving" »
  Sergey Levine
- 2019 : Sergey Levine: Unsupervised Reinforcement Learning and Meta-Learning »
  Sergey Levine
- 2019 Workshop: ICML Workshop on Imitation, Intent, and Interaction (I3) »
  Nicholas Rhinehart · Sergey Levine · Chelsea Finn · He He · Ilya Kostrikov · Justin Fu · Siddharth Reddy
- 2019 : Sergey Levine: Distribution Matching and Mutual Information in Reinforcement Learning »
  Sergey Levine
- 2019 Workshop: Generative Modeling and Model-Based Reasoning for Robotics and AI »
  Aravind Rajeswaran · Emanuel Todorov · Igor Mordatch · William Agnew · Amy Zhang · Joelle Pineau · Michael Chang · Dumitru Erhan · Sergey Levine · Kimberly Stachenfeld · Marvin Zhang
- 2019 Poster: Learning to Infer Program Sketches »
  Maxwell Nye · Luke Hewitt · Josh Tenenbaum · Armando Solar-Lezama
- 2019 Oral: Learning to Infer Program Sketches »
  Maxwell Nye · Luke Hewitt · Josh Tenenbaum · Armando Solar-Lezama
- 2019 Poster: Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables »
  Kate Rakelly · Aurick Zhou · Chelsea Finn · Sergey Levine · Deirdre Quillen
- 2019 Poster: Infinite Mixture Prototypes for Few-shot Learning »
  Kelsey Allen · Evan Shelhamer · Hanul Shin · Josh Tenenbaum
- 2019 Poster: SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning »
  Marvin Zhang · Sharad Vikram · Laura Smith · Pieter Abbeel · Matthew Johnson · Sergey Levine
- 2019 Oral: Infinite Mixture Prototypes for Few-shot Learning »
  Kelsey Allen · Evan Shelhamer · Hanul Shin · Josh Tenenbaum
- 2019 Oral: Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables »
  Kate Rakelly · Aurick Zhou · Chelsea Finn · Sergey Levine · Deirdre Quillen
- 2019 Oral: SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning »
  Marvin Zhang · Sharad Vikram · Laura Smith · Pieter Abbeel · Matthew Johnson · Sergey Levine
- 2019 Poster: Learning a Prior over Intent via Meta-Inverse Reinforcement Learning »
  Kelvin Xu · Ellis Ratner · Anca Dragan · Sergey Levine · Chelsea Finn
- 2019 Poster: EMI: Exploration with Mutual Information »
  Hyoungseok Kim · Jaekyeom Kim · Yeonwoo Jeong · Sergey Levine · Hyun Oh Song
- 2019 Poster: Task-Agnostic Dynamics Priors for Deep Reinforcement Learning »
  Yilun Du · Karthik Narasimhan
- 2019 Poster: Online Meta-Learning »
  Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine
- 2019 Poster: Neurally-Guided Structure Inference »
  Sidi Lu · Jiayuan Mao · Josh Tenenbaum · Jiajun Wu
- 2019 Poster: Diagnosing Bottlenecks in Deep Q-learning Algorithms »
  Justin Fu · Aviral Kumar · Matthew Soh · Sergey Levine
- 2019 Oral: Neurally-Guided Structure Inference »
  Sidi Lu · Jiayuan Mao · Josh Tenenbaum · Jiajun Wu
- 2019 Oral: Learning a Prior over Intent via Meta-Inverse Reinforcement Learning »
  Kelvin Xu · Ellis Ratner · Anca Dragan · Sergey Levine · Chelsea Finn
- 2019 Oral: EMI: Exploration with Mutual Information »
  Hyoungseok Kim · Jaekyeom Kim · Yeonwoo Jeong · Sergey Levine · Hyun Oh Song
- 2019 Oral: Diagnosing Bottlenecks in Deep Q-learning Algorithms »
  Justin Fu · Aviral Kumar · Matthew Soh · Sergey Levine
- 2019 Oral: Task-Agnostic Dynamics Priors for Deep Reinforcement Learning »
  Yilun Du · Karthik Narasimhan
- 2019 Oral: Online Meta-Learning »
  Chelsea Finn · Aravind Rajeswaran · Sham Kakade · Sergey Levine
- 2019 Tutorial: Meta-Learning: from Few-Shot Learning to Rapid Reinforcement Learning »
  Chelsea Finn · Sergey Levine
- 2018 Invited Talk: Building Machines that Learn and Think Like People »
  Josh Tenenbaum
- 2018 Poster: Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor »
  Tuomas Haarnoja · Aurick Zhou · Pieter Abbeel · Sergey Levine
- 2018 Poster: Regret Minimization for Partially Observable Deep Reinforcement Learning »
  Peter Jin · Kurt Keutzer · Sergey Levine
- 2018 Poster: The Mirage of Action-Dependent Baselines in Reinforcement Learning »
  George Tucker · Surya Bhupatiraju · Shixiang Gu · Richard E Turner · Zoubin Ghahramani · Sergey Levine
- 2018 Oral: Regret Minimization for Partially Observable Deep Reinforcement Learning »
  Peter Jin · Kurt Keutzer · Sergey Levine
- 2018 Oral: Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor »
  Tuomas Haarnoja · Aurick Zhou · Pieter Abbeel · Sergey Levine
- 2018 Oral: The Mirage of Action-Dependent Baselines in Reinforcement Learning »
  George Tucker · Surya Bhupatiraju · Shixiang Gu · Richard E Turner · Zoubin Ghahramani · Sergey Levine
- 2018 Poster: Latent Space Policies for Hierarchical Reinforcement Learning »
  Tuomas Haarnoja · Kristian Hartikainen · Pieter Abbeel · Sergey Levine
- 2018 Poster: Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings »
  John Co-Reyes · Yu Xuan Liu · Abhishek Gupta · Benjamin Eysenbach · Pieter Abbeel · Sergey Levine
- 2018 Poster: Universal Planning Networks: Learning Generalizable Representations for Visuomotor Control »
  Aravind Srinivas · Allan Jabri · Pieter Abbeel · Sergey Levine · Chelsea Finn
- 2018 Oral: Universal Planning Networks: Learning Generalizable Representations for Visuomotor Control »
  Aravind Srinivas · Allan Jabri · Pieter Abbeel · Sergey Levine · Chelsea Finn
- 2018 Oral: Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings »
  John Co-Reyes · Yu Xuan Liu · Abhishek Gupta · Benjamin Eysenbach · Pieter Abbeel · Sergey Levine
- 2018 Oral: Latent Space Policies for Hierarchical Reinforcement Learning »
  Tuomas Haarnoja · Kristian Hartikainen · Pieter Abbeel · Sergey Levine
- 2017 : Lifelong Learning - Panel Discussion »
  Sergey Levine · Joelle Pineau · Balaraman Ravindran · Andrei A Rusu
- 2017 : Sergey Levine: Self-supervision as a path to lifelong learning »
  Sergey Levine
- 2017 Poster: Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning »
  Yevgen Chebotar · Karol Hausman · Marvin Zhang · Gaurav Sukhatme · Stefan Schaal · Sergey Levine
- 2017 Talk: Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning »
  Yevgen Chebotar · Karol Hausman · Marvin Zhang · Gaurav Sukhatme · Stefan Schaal · Sergey Levine
- 2017 Poster: Modular Multitask Reinforcement Learning with Policy Sketches »
  Jacob Andreas · Dan Klein · Sergey Levine
- 2017 Poster: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks »
  Chelsea Finn · Pieter Abbeel · Sergey Levine
- 2017 Poster: Reinforcement Learning with Deep Energy-Based Policies »
  Tuomas Haarnoja · Haoran Tang · Pieter Abbeel · Sergey Levine
- 2017 Talk: Modular Multitask Reinforcement Learning with Policy Sketches »
  Jacob Andreas · Dan Klein · Sergey Levine
- 2017 Talk: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks »
  Chelsea Finn · Pieter Abbeel · Sergey Levine
- 2017 Talk: Reinforcement Learning with Deep Energy-Based Policies »
  Tuomas Haarnoja · Haoran Tang · Pieter Abbeel · Sergey Levine
- 2017 Tutorial: Deep Reinforcement Learning, Decision Making, and Control »
  Sergey Levine · Chelsea Finn