Reinforcement learning (RL) provides a theoretical framework for continuously improving an agent's behavior via trial and error. However, efficiently learning policies from scratch can be very difficult, particularly for tasks that present exploration challenges. In such settings, it might be desirable to initialize RL with an existing policy, offline data, or demonstrations. However, naively performing such initialization in RL often works poorly, especially for value-based methods. In this paper, we present a meta-algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy, and is compatible with any RL approach. In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks: a guide-policy and an exploration-policy. By using the guide-policy to form a curriculum of starting states for the exploration-policy, we are able to efficiently improve performance on a set of simulated robotic tasks. We show via experiments that JSRL significantly outperforms existing imitation and reinforcement learning algorithms, particularly in the small-data regime. In addition, we provide an upper bound on the sample complexity of JSRL and show that, with the help of a guide-policy, one can improve the sample complexity of non-optimistic exploration methods from exponential in the horizon to polynomial.
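To make the curriculum mechanism concrete, here is a minimal, self-contained sketch: a guide-policy rolls in for a shrinking number of steps before handing control to a Q-learning exploration-policy. The toy chain MDP, the tabular learner, and the success-threshold annealing rule are illustrative assumptions, not the paper's exact implementation; JSRL itself is agnostic to the underlying RL algorithm and task.

```python
# Minimal sketch of the JSRL curriculum idea on a toy chain MDP with
# tabular Q-learning. Environment, learner, and annealing rule are
# illustrative assumptions only.
import random

N, HORIZON = 10, 20          # chain length and episode horizon
ACTIONS = (-1, +1)           # step left or right along the chain

def step(state, action):
    """Chain MDP: reward 1 only at the rightmost state."""
    next_state = min(max(state + action, 0), N - 1)
    reward = 1.0 if next_state == N - 1 else 0.0
    return next_state, reward, next_state == N - 1

def guide_policy(state):
    """A pre-existing, goal-reaching (if suboptimal) policy: always go right."""
    return +1

Q = [[0.0, 0.0] for _ in range(N)]   # Q-table for the exploration-policy

def explore_policy(state, eps=0.2):
    """Epsilon-greedy exploration-policy derived from the Q-table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return ACTIONS[Q[state].index(max(Q[state]))]

def jsrl_episode(guide_steps, alpha=0.5, gamma=0.99):
    """Guide-policy rolls in for `guide_steps` steps, then the
    exploration-policy takes over; all transitions train Q."""
    state, ret = 0, 0.0
    for t in range(HORIZON):
        action = guide_policy(state) if t < guide_steps else explore_policy(state)
        next_state, reward, done = step(state, action)
        a = ACTIONS.index(action)
        target = reward + (0.0 if done else gamma * max(Q[next_state]))
        Q[state][a] += alpha * (target - Q[state][a])   # Q-learning update
        ret += reward
        state = next_state
        if done:
            break
    return ret

# Curriculum: start with the guide covering (almost) the full horizon and
# anneal its roll-in length toward zero as the exploration-policy improves.
guide_steps = N - 1
for episode in range(500):
    ret = jsrl_episode(guide_steps)
    if ret > 0 and guide_steps > 0:   # simple success criterion (assumption)
        guide_steps -= 1
print("final guide_steps:", guide_steps)
```

The design choice mirrors the abstract: because the guide first delivers the learner to states near the goal, the exploration-policy only ever has to learn a short suffix of the task at a time, which is what shrinks the effective exploration horizon.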
Author Information
Ikechukwu Uchendu (Harvard University)
Ted Xiao (Google)
Yao Lu (Google DeepMind)
Banghua Zhu (University of California, Berkeley)
Mengyuan Yan (Google)
Joséphine Simon (Georgia Institute of Technology)
Matthew Bennice (Google)
Chuyuan Fu (Google)
Cong Ma (University of Chicago)
Jiantao Jiao (University of California, Berkeley)
Sergey Levine (University of California, Berkeley)
Karol Hausman (Google Brain)
More from the Same Authors
- 2021 : Reinforcement Learning as One Big Sequence Modeling Problem
  Michael Janner · Qiyang Li · Sergey Levine
- 2021 : Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments
  Nicholas Rhinehart · Jenny Wang · Glen Berseth · John Co-Reyes · Danijar Hafner · Chelsea Finn · Sergey Levine
- 2021 : Explore and Control with Adversarial Surprise
  Arnaud Fickinger · Natasha Jaques · Samyak Parajuli · Michael Chang · Nicholas Rhinehart · Glen Berseth · Stuart Russell · Sergey Levine
- 2022 : Effective Offline RL Needs Going Beyond Pessimism: Representations and Distributional Shift
  Xinyang Geng · Kevin Li · Abhishek Gupta · Aviral Kumar · Sergey Levine
- 2022 : DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning
  Quan Vuong · Aviral Kumar · Sergey Levine · Yevgen Chebotar
- 2022 : Distributionally Adaptive Meta Reinforcement Learning
  Anurag Ajay · Dibya Ghosh · Sergey Levine · Pulkit Agrawal · Abhishek Gupta
- 2022 : You Only Live Once: Single-Life Reinforcement Learning via Learned Reward Shaping
  Annie Chen · Archit Sharma · Sergey Levine · Chelsea Finn
- 2022 : Multimodal Masked Autoencoders Learn Transferable Representations
  Xinyang Geng · Hao Liu · Lisa Lee · Dale Schuurmans · Sergey Levine · Pieter Abbeel
- 2023 : Deep Neural Networks Extrapolate Cautiously (Most of the Time)
  Katie Kang · Amrith Setlur · Claire Tomlin · Sergey Levine
- 2023 : Offline Goal-Conditioned RL with Latent States as Actions
  Seohong Park · Dibya Ghosh · Benjamin Eysenbach · Sergey Levine
- 2023 : Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning
  Hanlin Zhu · Paria Rashidinejad · Jiantao Jiao
- 2023 : Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware
  Tony Zhao · Vikash Kumar · Sergey Levine · Chelsea Finn
- 2023 : Training Diffusion Models with Reinforcement Learning
  Kevin Black · Michael Janner · Yilun Du · Ilya Kostrikov · Sergey Levine
- 2023 : Video-Guided Skill Discovery
  Manan Tomar · Dibya Ghosh · Vivek Myers · Anca Dragan · Matthew Taylor · Philip Bachman · Sergey Levine
- 2023 : Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning
  Mitsuhiko Nakamoto · Yuexiang Zhai · Anikait Singh · Max Sobol Mark · Yi Ma · Chelsea Finn · Aviral Kumar · Sergey Levine
- 2023 : Principled Reinforcement Learning with Human Feedback from Pairwise or $K$-wise Comparisons
  Banghua Zhu · Michael Jordan · Jiantao Jiao
- 2023 Poster: Online Learning in Stackelberg Games with an Omniscient Follower
  Geng Zhao · Banghua Zhu · Jiantao Jiao · Michael Jordan
- 2023 Poster: A Connection between One-Step RL and Critic Regularization in Reinforcement Learning
  Benjamin Eysenbach · Matthieu Geist · Sergey Levine · Ruslan Salakhutdinov
- 2023 Poster: Adversarial Policies Beat Superhuman Go AIs
  Tony Wang · Adam Gleave · Tom Tseng · Kellin Pelrine · Nora Belrose · Joseph Miller · Michael Dennis · Yawen Duan · Viktor Pogrebniak · Sergey Levine · Stuart Russell
- 2023 Poster: Predictable MDP Abstraction for Unsupervised Model-Based RL
  Seohong Park · Sergey Levine
- 2023 Poster: Principled Reinforcement Learning with Human Feedback from Pairwise or $K$-wise Comparisons
  Banghua Zhu · Michael Jordan · Jiantao Jiao
- 2023 Oral: Adversarial Policies Beat Superhuman Go AIs
  Tony Wang · Adam Gleave · Tom Tseng · Kellin Pelrine · Nora Belrose · Joseph Miller · Michael Dennis · Yawen Duan · Viktor Pogrebniak · Sergey Levine · Stuart Russell
- 2023 Poster: Reinforcement Learning from Passive Data via Latent Intentions
  Dibya Ghosh · Chethan Bhateja · Sergey Levine
- 2023 Poster: Understanding the Complexity Gains of Single-Task RL with a Curriculum
  Qiyang Li · Yuexiang Zhai · Yi Ma · Sergey Levine
- 2023 Oral: Reinforcement Learning from Passive Data via Latent Intentions
  Dibya Ghosh · Chethan Bhateja · Sergey Levine
- 2023 Poster: PaLM-E: An Embodied Multimodal Language Model
  Danny Driess · Fei Xia · Mehdi S. M. Sajjadi · Corey Lynch · Aakanksha Chowdhery · Brian Ichter · Ayzaan Wahid · Jonathan Tompson · Quan Vuong · Tianhe (Kevin) Yu · Wenlong Huang · Yevgen Chebotar · Pierre Sermanet · Daniel Duckworth · Sergey Levine · Vincent Vanhoucke · Karol Hausman · Marc Toussaint · Klaus Greff · Andy Zeng · Igor Mordatch · Pete Florence
- 2023 Poster: Efficient Online Reinforcement Learning with Offline Data
  Philip Ball · Laura Smith · Ilya Kostrikov · Sergey Levine
- 2023 Poster: The Power of Preconditioning in Overparameterized Low-Rank Matrix Sensing
  Xingyu Xu · Yandi Shen · Yuejie Chi · Cong Ma
- 2021 Poster: Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills
  Yevgen Chebotar · Karol Hausman · Yao Lu · Ted Xiao · Dmitry Kalashnikov · Jacob Varley · Alexander Irpan · Benjamin Eysenbach · Ryan C Julian · Chelsea Finn · Sergey Levine
- 2021 Spotlight: Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills
  Yevgen Chebotar · Karol Hausman · Yao Lu · Ted Xiao · Dmitry Kalashnikov · Jacob Varley · Alexander Irpan · Benjamin Eysenbach · Ryan C Julian · Chelsea Finn · Sergey Levine
- 2019 Poster: Theoretically Principled Trade-off between Robustness and Accuracy
  Hongyang Zhang · Yaodong Yu · Jiantao Jiao · Eric Xing · Laurent El Ghaoui · Michael Jordan
- 2019 Oral: Theoretically Principled Trade-off between Robustness and Accuracy
  Hongyang Zhang · Yaodong Yu · Jiantao Jiao · Eric Xing · Laurent El Ghaoui · Michael Jordan