Poster
When Diffusion Models Memorize: Inductive Biases in Probability Flow of Minimum-Norm Shallow Neural Nets
Chen Zeno · Hila Manor · Gregory Ongie · Nir Weinberger · Tomer Michaeli · Daniel Soudry
East Exhibition Hall A-B #E-2109
Diffusion models are a popular type of generative AI that create realistic images by gradually refining random noise into structure. Although they perform remarkably well, researchers still don't fully understand why these models are so effective. A central open question is whether diffusion models simply memorize training images or generate new ones by blending features from multiple examples.

To explore this, we study a simplified version of a diffusion model built from small (shallow, minimum-norm) neural networks. We examine how these models behave over time, observing whether they return to exact training examples or converge to new, intermediate points that mix features from several images. Our findings show that both memorization and creative generalization can occur, depending on how long the generation process is allowed to run.

These insights help explain how diffusion models can produce both familiar-looking and entirely novel images, and offer a better understanding of the trade-offs in their behavior.
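To make the memorization-versus-generalization picture concrete, here is a minimal toy sketch of a probability flow sampler. It is not the paper's shallow-network model: the learned network is replaced by the closed-form score of a Gaussian kernel-density estimate over a handful of 2-D "training images", and the function names, noise schedule, and stopping levels are illustrative assumptions. Running the flow down to near-zero noise collapses onto a training point (memorization), while stopping at a higher noise level leaves an intermediate point that blends several training examples.

```python
import numpy as np

def score_gaussian_kde(x, data, sigma):
    """Score (gradient of log density) of a Gaussian KDE centred on the
    training points, at noise level sigma. Closed form, no learned network."""
    diffs = data - x                                   # (N, d): x_i - x for each training point
    logits = -np.sum(diffs**2, axis=1) / (2 * sigma**2)
    w = np.exp(logits - logits.max())
    w /= w.sum()                                       # softmax responsibilities
    return (w[:, None] * diffs).sum(axis=0) / sigma**2

def probability_flow_sample(data, sigma_max=10.0, sigma_min=1e-3, n_steps=500, rng=None):
    """Euler integration of the variance-exploding probability-flow ODE
    dx/dsigma = -sigma * score. Integrating down to sigma_min ~ 0 collapses onto
    a training point; stopping earlier leaves a blend of nearby examples."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(scale=sigma_max, size=data.shape[1])    # start from pure noise
    sigmas = np.geomspace(sigma_max, sigma_min, n_steps)
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        x = x + (s_next - s) * (-s * score_gaussian_kde(x, data, s))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(size=(8, 2))                        # 8 toy "training images" in 2D
    # Full run: the sample lands (numerically) on one of the training points.
    x_full = probability_flow_sample(train, sigma_min=1e-3, rng=rng)
    # Early stop: the sample is an intermediate point mixing several training points.
    x_early = probability_flow_sample(train, sigma_min=0.5, rng=rng)
    print("distance of full run to nearest training point:",
          np.linalg.norm(train - x_full, axis=1).min())
    print("early-stopped sample:", x_early)
```

The geometric noise schedule and Euler integrator are standard choices for variance-exploding diffusion samplers; in this toy setup, only how far the flow is run (the final noise level) determines whether the output snaps to a memorized training point or remains an intermediate blend.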