

Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Your Diffusion Model is Secretly a Zero-Shot Classifier

Alexander Li · Mihir Prabhudesai · Shivam Duggal · Ellis Brown · Deepak Pathak

Keywords: [ Generative Model ] [ Diffusion Model ] [ Zero-shot ] [ Inference ] [ Classification ]


Abstract:

The recent wave of large-scale text-to-image diffusion models has dramatically increased our text-based image generation abilities. However, almost all use cases so far have solely focused on sampling. In this paper, we show that the density estimates from large-scale text-to-image diffusion models like Stable Diffusion can be leveraged to perform zero-shot classification without any additional training. Our generative approach to classification, which we call Diffusion Classifier, attains strong results on a variety of benchmarks and outperforms alternative methods of extracting knowledge from diffusion models. We also find that our diffusion-based approach has stronger multimodal relational reasoning abilities than competing discriminative approaches. Finally, we use Diffusion Classifier to extract standard classifiers from class-conditional diffusion models trained on ImageNet. Even though these models are trained with weak augmentations and no regularization, they approach the performance of SOTA discriminative classifiers. Overall, our results are a step toward using generative over discriminative models for downstream tasks.
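The abstract does not spell out the classification procedure, but the core idea of using a conditional diffusion model's density estimates for classification can be sketched as scoring each candidate class by how well the class-conditioned model denoises the input image. The sketch below is illustrative, not the authors' implementation: `noise_pred_fn` is a hypothetical stand-in for a conditional noise-prediction network (e.g. the U-Net inside Stable Diffusion), and `alphas_cumprod` is the scheduler's cumulative noise schedule.

```python
# Minimal sketch: zero-shot classification via conditional denoising error.
# Lower accumulated noise-prediction error corresponds to a higher
# (approximate) conditional likelihood, so we pick the argmin class.
import torch


@torch.no_grad()
def diffusion_classify(x0, class_conds, noise_pred_fn, alphas_cumprod, n_trials=100):
    """Return the index of the class conditioning that best explains image x0.

    x0:             clean image (or latent) tensor
    class_conds:    list of conditioning inputs, one per candidate class
    noise_pred_fn:  hypothetical callable (x_t, t, cond) -> predicted noise
    alphas_cumprod: 1-D tensor of cumulative alpha-bar values per timestep
    """
    errors = torch.zeros(len(class_conds))
    for _ in range(n_trials):
        # Share the timestep and noise across classes for a fair comparison.
        t = torch.randint(0, len(alphas_cumprod), (1,)).item()
        noise = torch.randn_like(x0)
        a_bar = alphas_cumprod[t]
        x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
        for i, cond in enumerate(class_conds):
            eps_hat = noise_pred_fn(x_t, t, cond)
            errors[i] += torch.mean((eps_hat - noise) ** 2).item()
    return int(errors.argmin())
```

In practice such a scheme repeats this Monte Carlo estimate over many timestep/noise samples per class, so the main cost is the number of forward passes rather than any additional training.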
