

Tutorial

Flowing Through Continuous-Time Generative Models: A Clear and Systematic Tour

Qiang Liu


Abstract:

Continuous-time generative models—particularly diffusion- and flow-based models—have emerged as a dominant paradigm in generative AI, with applications in image, video, molecular, and audio synthesis, as well as scientific modeling. Despite their success, the field’s rich mathematical structure, varied terminology, and subtle theoretical foundations often lead to confusion and fragmented understanding.

This tutorial offers a clear, unified, and accessible introduction to continuous-time generative models. Beginning with the simplified lens of rectified flow, we build a streamlined conceptual framework to support systematic exploration of the algorithmic landscape, while minimizing unnecessary mathematical overhead. We clarify commonly confused ideas and untangle key relationships—such as flow vs. diffusion, and the interplay between interpolation, noise schedules, and samplers. We also touch on advanced topics including distillation, control, and discrete and constrained generation in flow and diffusion models.
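As a rough orientation to the rectified flow formulation referenced above (the notation here is one common convention and not necessarily the tutorial's): a noise sample $X_0$ and a data sample $X_1$ are coupled along a straight-line interpolation, a velocity field $v_\theta$ is regressed onto the line's constant direction, and new samples are drawn by integrating the resulting ordinary differential equation from noise to data:

\[
X_t = (1 - t)\,X_0 + t\,X_1, \qquad t \in [0, 1],
\]
\[
\min_\theta \; \mathbb{E}_{X_0,\, X_1,\, t}\!\left[ \left\| v_\theta(X_t, t) - (X_1 - X_0) \right\|^2 \right],
\]
\[
\mathrm{d}Z_t = v_\theta(Z_t, t)\,\mathrm{d}t, \qquad Z_0 \sim p_{\text{noise}}.
\]

Different choices of interpolation path, noise schedule, and ODE/SDE sampler recover many of the diffusion and flow variants surveyed in the tutorial.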
