Synthetic Data, Real Errors: How (Not) to Publish and Use Synthetic Data
Boris van Breugel · Zhaozhi Qian · Mihaela van der Schaar

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #306

Generating synthetic data through generative models is gaining interest in the ML community and beyond, promising a future where datasets can be tailored to individual needs. Unfortunately, synthetic data is usually not perfect, resulting in potential errors in downstream tasks. In this work we explore how the generative process affects the downstream ML task. We show that the naive synthetic data approach (using synthetic data as if it were real) leads to downstream models and analyses that do not generalize well to real data. As a first step towards better ML in the synthetic data regime, we introduce Deep Generative Ensemble (DGE), a framework inspired by Deep Ensembles that aims to implicitly approximate the posterior distribution over the generative model's parameters. DGE improves downstream model training, evaluation, and uncertainty quantification, vastly outperforming the naive approach on average. The largest improvements are achieved for minority classes and low-density regions of the original data, for which the generative uncertainty is largest.
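The ensemble idea the abstract describes can be sketched as follows. This is a hypothetical, minimal illustration, not the paper's implementation: Gaussian mixtures stand in for deep generative models, and all names and parameters are illustrative. Each ensemble member is a generative model trained with a different random seed; a downstream classifier is fit on each member's synthetic sample, and the members' predicted probabilities are averaged, with their spread serving as a proxy for generative uncertainty.

```python
# Hypothetical sketch of a Deep-Generative-Ensemble-style pipeline:
# K generative models (here, per-class Gaussian mixtures) trained with
# different seeds, one synthetic dataset drawn from each, one downstream
# model per dataset, ensemble by averaging predicted probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

K = 5  # ensemble size (illustrative choice)
probs = []
for k in range(K):
    # One generative model per class, re-seeded for each ensemble member.
    X_syn, y_syn = [], []
    for c in (0, 1):
        gm = GaussianMixture(n_components=2, random_state=k).fit(X[y == c])
        samples, _ = gm.sample(250)
        X_syn.append(samples)
        y_syn.append(np.full(250, c))
    X_syn, y_syn = np.vstack(X_syn), np.concatenate(y_syn)

    # Downstream model trained on this member's synthetic data only.
    clf = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)
    probs.append(clf.predict_proba(X)[:, 1])

probs = np.stack(probs)       # shape (K, n_samples)
p_mean = probs.mean(axis=0)   # ensemble prediction
p_std = probs.std(axis=0)     # disagreement across members: uncertainty proxy
```

In this sketch, `p_std` tends to be largest in low-density regions, where the K generative models disagree most, mirroring the abstract's observation that generative uncertainty concentrates in minority classes and sparse regions.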

Author Information

Boris van Breugel (University of Cambridge)
Zhaozhi Qian (University of Cambridge)
Mihaela van der Schaar (University of Cambridge and UCLA)