
Online Decision Transformer
Qinqing Zheng · Amy Zhang · Aditya Grover

Thu Jul 21 01:15 PM -- 01:35 PM (PDT) @ Room 309

Recent work has shown that offline reinforcement learning (RL) can be formulated as a sequence modeling problem (Chen et al., 2021; Janner et al., 2021) and solved via approaches similar to large-scale language modeling. However, any practical instantiation of RL also involves an online component, where policies pretrained on passive offline datasets are finetuned via task-specific interactions with the environment. We propose Online Decision Transformers (ODT), an RL algorithm based on sequence modeling that blends offline pretraining with online finetuning in a unified framework. Our framework uses sequence-level entropy regularizers in conjunction with autoregressive modeling objectives for sample-efficient exploration and finetuning. Empirically, we show that ODT is competitive with the state-of-the-art in absolute performance on the D4RL benchmark but shows much more significant gains during the finetuning procedure.
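To make the objective concrete, the following is a minimal sketch of an entropy-regularized autoregressive action loss of the kind the abstract describes: the mean negative log-likelihood of the action sequence under a stochastic (here, diagonal Gaussian) policy, with a Lagrange-style term encouraging the policy's entropy to stay above a target. This is an illustrative reconstruction, not the paper's implementation; the function names (`odt_loss`, `target_entropy`, `lam`) and the fixed-multiplier treatment of the dual variable are assumptions.

```python
import math

def gaussian_nll(a, mu, sigma):
    # Negative log-likelihood of action a under N(mu, sigma^2), one dimension.
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (a - mu) ** 2 / (2 * sigma ** 2)

def gaussian_entropy(sigma):
    # Differential entropy of a 1-D Gaussian: 0.5 * log(2*pi*e*sigma^2).
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def odt_loss(actions, mus, sigmas, lam=0.1, target_entropy=-1.0):
    """Sketch of a sequence-level entropy-regularized objective:
    mean action NLL minus lam * (mean policy entropy - target entropy).

    `actions` are the ground-truth actions in the trajectory; `mus`/`sigmas`
    are the policy's per-step Gaussian parameters (hypothetical interface).
    """
    nll = sum(gaussian_nll(a, m, s)
              for a, m, s in zip(actions, mus, sigmas)) / len(actions)
    ent = sum(gaussian_entropy(s) for s in sigmas) / len(sigmas)
    # Lower NLL (better action prediction) and higher entropy (more
    # exploration, up to the target) both reduce the loss.
    return nll - lam * (ent - target_entropy)
```

In a full implementation the entropy multiplier would itself be learned via dual gradient descent so the constraint binds only when entropy falls below the target; a fixed `lam` is used here purely to keep the sketch self-contained.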

Author Information

Qinqing Zheng (Meta AI Research)
Amy Zhang (FAIR / UC Berkeley)
Aditya Grover (UCLA)
