Poster in Workshop: Foundations of Reinforcement Learning and Control: Connections and Perspectives
Model Based Diffusion for Trajectory Optimization
Chaoyi Pan · Zeji Yi · Guanya Shi · Guannan Qu
Abstract:
Recent advances in diffusion models have demonstrated their strong capabilities in generating high-fidelity samples from complex distributions through an iterative refinement process. Despite the empirical success of diffusion models in motion planning and control, these methods are model-free: they do not leverage readily available model information, which limits their generalization to new scenarios beyond the training data (e.g., new robots with different dynamics). In this work, we introduce $\underline{M}$odel-$\underline{B}$ased $\underline{D}$iffusion (MBD), an optimization approach that uses the diffusion process to solve trajectory optimization (TO) problems \textbf{without data}. The key idea is to explicitly compute the score function by leveraging the model information in TO problems, which is why we refer to our approach as \textbf{model-based} diffusion. Moreover, although MBD does not require external data, it can naturally be integrated with data of diverse quality to steer the diffusion process. We also reveal that MBD has interesting connections to sampling-based optimization. Empirical evaluations show that MBD outperforms state-of-the-art reinforcement learning and sampling-based TO methods on challenging contact-rich tasks. Additionally, MBD's ability to integrate with data enhances its versatility and practical applicability, even with imperfect or infeasible data (e.g., partial-state demonstrations for high-dimensional humanoids), beyond the scope of standard diffusion models. Videos and code: \url{https://lecar-lab.github.io/mbd/}
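To make the key idea concrete, the sketch below illustrates one way a score can be computed from a model rather than learned from data: candidate trajectories are sampled at each noise level, evaluated with a known dynamics model, and softmax-reweighted to form a denoised estimate, which drives a reverse diffusion step. This is a minimal toy sketch under our own assumptions (a 1-D double-integrator model, a cosine noise schedule, MPPI-style reweighting with temperature `temp`), not the authors' released implementation; see the linked code for the actual method.

```python
import numpy as np

# Illustrative toy problem (our assumption, not from the paper): steer a
# 1-D double integrator to a goal position over a fixed horizon.
H, DT, GOAL = 20, 0.1, 1.0

def rollout_reward(actions):
    """Roll out the known dynamics model and score the resulting trajectory."""
    pos, vel = 0.0, 0.0
    for a in actions:
        vel += DT * a
        pos += DT * vel
    # High reward for ending near the goal with low control effort.
    return -abs(pos - GOAL) - 0.01 * float(np.sum(actions**2))

def model_based_diffusion(n_steps=50, n_samples=256, temp=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Noise schedule: alpha_bar decays from ~1 (clean) toward 0 (pure noise);
    # clipped away from 0 to keep the divisions below well defined.
    alpha_bar = np.clip(np.cos(np.linspace(0.0, np.pi / 2, n_steps + 1)) ** 2,
                        1e-4, 1.0)
    y = rng.standard_normal(H)  # start from noise, as in a diffusion sampler
    for i in range(n_steps, 0, -1):
        # Sample candidate "clean" trajectories around the current iterate.
        mean = y / np.sqrt(alpha_bar[i])
        std = np.sqrt((1.0 - alpha_bar[i]) / alpha_bar[i])
        cands = mean + std * rng.standard_normal((n_samples, H))
        # Model in the loop: evaluate each candidate with the dynamics model
        # and reweight, a Monte Carlo stand-in for a learned score network.
        rewards = np.array([rollout_reward(c) for c in cands])
        w = np.exp((rewards - rewards.max()) / temp)
        w /= w.sum()
        y0_hat = w @ cands                       # weighted denoised estimate
        y = np.sqrt(alpha_bar[i - 1]) * y0_hat   # reverse step to next level
    return y

actions = model_based_diffusion()
print("final reward:", rollout_reward(actions))
```

The softmax reweighting makes the connection to sampling-based optimization noted in the abstract explicit: each reverse step resembles an MPPI-style update, with the noise schedule gradually shrinking the search distribution around promising trajectories.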