

Poster

Synthetic Data, Real Errors: How (Not) to Publish and Use Synthetic Data

Boris van Breugel · Zhaozhi Qian · Mihaela van der Schaar

Exhibit Hall 1 #306

Abstract:

Generating synthetic data through generative models is gaining interest in the ML community and beyond, promising a future where datasets can be tailored to individual needs. Unfortunately, synthetic data is usually not perfect, resulting in potential errors in downstream tasks. In this work, we explore how the generative process affects the downstream ML task. We show that the naive synthetic data approach---using synthetic data as if it were real---leads to downstream models and analyses that do not generalize well to real data. As a first step towards better ML in the synthetic data regime, we introduce Deep Generative Ensemble (DGE)---a framework inspired by Deep Ensembles that aims to implicitly approximate the posterior distribution over the generative model's parameters. DGE improves downstream model training, evaluation, and uncertainty quantification, vastly outperforming the naive approach on average. The largest improvements are achieved for minority classes and low-density regions of the original data, for which the generative uncertainty is largest.
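The DGE idea described above can be sketched in code: fit several independently initialized generative models, draw one synthetic dataset from each, train a downstream model per dataset, and aggregate the ensemble's predictions, using member disagreement as an uncertainty signal. The sketch below is a minimal illustration under stated assumptions---it stands in simple class-conditional Gaussian samplers for the deep generative models, and the paper's actual generator architectures and aggregation details may differ.

```python
# Hedged sketch of a DGE-style pipeline. Assumptions: the "generators"
# here are toy class-conditional Gaussians fitted to bootstrap resamples
# of the real data; the paper uses deep generative models instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Real" training data: two well-separated Gaussian classes.
X_real = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
                    rng.normal(3.0, 1.0, (200, 2))])
y_real = np.array([0] * 200 + [1] * 200)

K = 5  # ensemble size
members = []
for k in range(K):
    # Step 1: fit the k-th generator on a bootstrap resample; independent
    # fits mimic Deep Ensembles' implicit approximation of the posterior
    # over generator parameters.
    idx = rng.integers(0, len(X_real), len(X_real))
    Xb, yb = X_real[idx], y_real[idx]
    synth_X, synth_y = [], []
    for c in (0, 1):
        Xc = Xb[yb == c]
        mu, sigma = Xc.mean(axis=0), Xc.std(axis=0) + 1e-6
        synth_X.append(rng.normal(mu, sigma, (200, 2)))
        synth_y.append(np.full(200, c))
    Xs, ys = np.vstack(synth_X), np.concatenate(synth_y)
    # Step 2: train the downstream model on the k-th synthetic dataset.
    members.append(LogisticRegression().fit(Xs, ys))

# Step 3: aggregate. The mean predictive probability is the ensemble
# prediction; the spread across members reflects generative uncertainty.
X_test = np.array([[0.0, 0.0], [1.5, 1.5], [3.0, 3.0]])
probs = np.stack([m.predict_proba(X_test)[:, 1] for m in members])
mean_pred = probs.mean(axis=0)
uncertainty = probs.std(axis=0)
print(mean_pred.round(2), uncertainty.round(3))
```

In this toy setting the ensemble assigns low class-1 probability near the class-0 mode and high probability near the class-1 mode, while the standard deviation across members gives a cheap per-point uncertainty estimate that tends to grow in low-density regions, matching the intuition in the abstract.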
