Foundations of Deep Generative Models: Understanding Memorization, Generalization, and Reasoning
Abstract
Diffusion models, flow-based models, and autoregressive language models have recently emerged as powerful classes of deep generative models (DGMs), achieving remarkable generation capabilities across a wide range of applications, including image synthesis, video generation, natural language generation, and scientific discovery. Despite these successes, significant challenges remain, particularly in understanding how these models memorize, generalize, and reason; these open questions limit their reliability, interpretability, and broader adoption across scientific disciplines. This workshop will bring together researchers from both theoretical and applied communities to address these challenges, providing a focused forum for exchanging ideas, identifying key open problems, and fostering new collaborations in this rapidly evolving area.