

Poster

Energy-Guided Diffusion Sampling for Offline-to-Online Reinforcement Learning

Xu-Hui Liu · Tian-Shuo Liu · Shengyi Jiang · Ruifeng Chen · Zhilong Zhang · Xinwei Chen · Yang Yu


Abstract:

Existing methods replay offline data directly in the online phase, resulting in a significant challenge of data distribution shift and subsequently causing inefficiency in online fine-tuning. To address this issue, we introduce an innovative approach, Energy-guided DIffusion Sampling (EDIS), which utilizes a diffusion model to extract prior knowledge from the offline dataset and employs energy functions to distill this knowledge for enhanced data generation in the online phase. The generated samples conform to the online fine-tuning distribution without sacrificing transition fidelity. Our theoretical analysis shows that EDIS exhibits reduced suboptimality compared to solely utilizing online data or directly replaying offline data. EDIS is a plug-in approach and can be combined with existing methods in the offline-to-online setting. By applying EDIS to the off-the-shelf methods Cal-QL and IQL, we observe a notable 20% average improvement in empirical performance on the MuJoCo, AntMaze, and Adroit environments.
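
As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below shows a DDPM-style reverse sampling loop in which the noise predicted by a diffusion model trained on offline data is corrected by the gradient of a learned energy function, steering generated samples toward the online distribution. The names `denoiser`, `energy_fn`, `guidance_scale`, and the linear noise schedule are assumed placeholders.

```python
import torch

def energy_guided_sample(denoiser, energy_fn, shape, n_steps=50, guidance_scale=1.0):
    # Hypothetical sketch of energy-guided diffusion sampling.
    # denoiser(x, t): predicts the noise added to x at step t (offline prior).
    # energy_fn(x, t): scalar energy per sample; its gradient provides guidance.
    betas = torch.linspace(1e-4, 2e-2, n_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # start from pure noise
    for t in reversed(range(n_steps)):
        t_batch = torch.full((shape[0],), t)

        # Noise prediction from the diffusion model trained on offline data.
        eps = denoiser(x, t_batch)

        # Energy guidance: shift the predicted noise by the energy gradient,
        # distilling knowledge about the online distribution into the samples.
        with torch.enable_grad():
            x_req = x.detach().requires_grad_(True)
            energy = energy_fn(x_req, t_batch).sum()
            grad = torch.autograd.grad(energy, x_req)[0]
        eps = eps + guidance_scale * torch.sqrt(1.0 - alpha_bars[t]) * grad

        # Standard DDPM posterior mean update with the guided noise estimate.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x
```

In this sketch the generated transitions could then be mixed into the online replay buffer of a base algorithm such as Cal-QL or IQL, which is where the plug-in nature of the approach would come in; the exact guidance weighting and energy parameterization are design choices not specified by the abstract.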
