Poster in Workshop: Challenges in Deployable Generative AI

Using Synthetic Data for Data Augmentation to Improve Classification Accuracy

Yongchao Zhou · Hshmat Sahak · Jimmy Ba

Keywords: [ generative model ] [ diffusion model ] [ synthetic data ] [ data augmentation ] [ image classification ] [ model inversion ]


Abstract:

Obtaining high-quality data for training classification models is challenging when sufficient data covering the real manifold is difficult to find in the wild. In this paper, we present Diffusion Inversion, a dataset-agnostic augmentation strategy for training classification models. Diffusion Inversion is a simple yet effective method that leverages the powerful pretrained Stable Diffusion model to generate synthetic datasets that ensure coverage of the original data manifold while also generating novel samples that extrapolate beyond the training domain to allow for better generalization. We ensure data coverage by inverting each image in the original set to its condition vector in the latent space of Stable Diffusion. We ensure sample diversity by adding noise to the learned embeddings or performing interpolation in the latent space, and using the new vector as the conditioning signal. The method produces high-quality and diverse samples, consistently outperforming generic prompt-based steering methods and KNN retrieval baselines across a wide range of common and specialized datasets. Furthermore, we demonstrate the compatibility of our approach with widely-used data augmentation techniques, and assess the reliability of the generated data in both supporting various neural architectures and enhancing few-shot learning performance.
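To make the generation step concrete, the sketch below shows one plausible way to perturb and interpolate learned condition vectors and decode them with Stable Diffusion via Hugging Face `diffusers`. It is a minimal illustration, not the authors' implementation: it assumes each training image has already been inverted to a prompt embedding of shape (1, 77, 768), and the inversion step itself is omitted (placeholder tensors stand in for the learned embeddings).

```python
# Minimal sketch: perturb / interpolate learned condition vectors and decode
# them with Stable Diffusion. Assumes embeddings were obtained elsewhere by
# inverting training images; placeholders are used here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def perturb(emb: torch.Tensor, scale: float = 0.1) -> torch.Tensor:
    """Add Gaussian noise to a learned condition vector to diversify samples."""
    return emb + scale * torch.randn_like(emb)

def interpolate(emb_a: torch.Tensor, emb_b: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """Linearly interpolate between two learned condition vectors."""
    return torch.lerp(emb_a, emb_b, t)

# Placeholder embeddings; in practice these come from inverting two training images.
emb_a = torch.randn(1, 77, 768, dtype=torch.float16, device="cuda")
emb_b = torch.randn(1, 77, 768, dtype=torch.float16, device="cuda")

# Build a new conditioning vector and decode it into a synthetic image.
new_cond = interpolate(perturb(emb_a), emb_b, t=0.3)
image = pipe(prompt_embeds=new_cond, num_inference_steps=30).images[0]
image.save("synthetic_sample.png")
```

Passing the perturbed embedding through `prompt_embeds` bypasses the text encoder, so the generator is steered directly by the image-specific condition vector rather than by a generic class-name prompt.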
