Diffusion models have shown remarkable performance in modeling data distributions and synthesizing data. Vanilla diffusion models, however, typically require complete, fully observed training data, whereas incomplete data is common in many real-world applications, particularly in tabular data. This work presents a unified and principled diffusion-based framework for learning from data with missing values under various missing mechanisms. We first observe that the widely adopted "impute-then-generate" pipeline may lead to a biased learning objective. We then propose masking the regression loss of Denoising Score Matching during training. We show that the proposed method is consistent in learning the score of the data distribution, and that the training objective serves as an upper bound on the negative likelihood in certain cases. The proposed framework is evaluated on multiple tabular datasets using realistic and effective metrics, and is demonstrated to outperform several baseline methods by a large margin.
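To make the masked objective concrete, here is a minimal sketch of a Denoising Score Matching loss in which only observed coordinates contribute to the regression. It assumes a Gaussian perturbation kernel at a single fixed noise level; the names `masked_dsm_loss`, `score_net`, and `obs_mask` are hypothetical illustrations, not the paper's implementation.

```python
import torch

def masked_dsm_loss(score_net, x0, obs_mask, sigma):
    """Denoising Score Matching loss restricted to observed entries (sketch).

    Args:
        score_net: callable s_theta(x_t, sigma) returning an estimated score
                   with the same shape as x_t.
        x0:        (batch, dim) training rows; missing entries may hold any
                   placeholder value, since the mask excludes them.
        obs_mask:  (batch, dim) binary mask, 1 = observed, 0 = missing.
        sigma:     noise level of the perturbation kernel N(x0, sigma^2 I).
    """
    noise = torch.randn_like(x0)
    x_t = x0 + sigma * noise           # perturb the data with Gaussian noise
    target = -noise / sigma            # score of N(x_t; x0, sigma^2 I)
    residual = score_net(x_t, sigma) - target
    # Mask the regression loss: missing coordinates contribute nothing,
    # so no imputed values enter the training target.
    loss = (obs_mask * residual).pow(2).sum(dim=1)
    return loss.mean()
```

By contrast, an impute-then-generate pipeline would first fill in the missing entries and then regress on all coordinates, which is where the bias identified in the abstract can enter.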
Author Information
Yidong Ouyang (CUHKSZ)
Liyan Xie (The Chinese University of Hong Kong, Shenzhen)
Chongxuan Li (Tsinghua University)
Guang Cheng (University of California, Los Angeles)
More from the Same Authors

- 2023 Poster: Towards Understanding Generalization of Macro-AUC in Multi-label Learning
  Guoqiang Wu · Chongxuan Li · Yilong Yin
- 2023 Poster: Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning
  Cheng Lu · Huayu Chen · Jianfei Chen · Hang Su · Chongxuan Li · Jun Zhu
- 2023 Poster: Improving Adversarial Robustness Through the Contrastive-Guided Diffusion Process
  Yidong Ouyang · Liyan Xie · Guang Cheng
- 2023 Poster: Revisiting Discriminative vs. Generative Classifiers: Theory and Implications
  Chenyu Zheng · Guoqiang Wu · Fan Bao · Yue Cao · Chongxuan Li · Jun Zhu
- 2023 Poster: One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale
  Fan Bao · Shen Nie · Kaiwen Xue · Chongxuan Li · Shi Pu · Yaole Wang · Gang Yue · Yue Cao · Hang Su · Jun Zhu
- 2022 Poster: Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching
  Cheng Lu · Kaiwen Zheng · Fan Bao · Jianfei Chen · Chongxuan Li · Jun Zhu
- 2022 Poster: Fast Lossless Neural Compression with Integer-Only Discrete Flows
  Siyu Wang · Jianfei Chen · Chongxuan Li · Jun Zhu · Bo Zhang
- 2022 Spotlight: Fast Lossless Neural Compression with Integer-Only Discrete Flows
  Siyu Wang · Jianfei Chen · Chongxuan Li · Jun Zhu · Bo Zhang
- 2022 Spotlight: Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching
  Cheng Lu · Kaiwen Zheng · Fan Bao · Jianfei Chen · Chongxuan Li · Jun Zhu
- 2022 Poster: Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models
  Fan Bao · Chongxuan Li · Jiacheng Sun · Jun Zhu · Bo Zhang
- 2022 Spotlight: Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models
  Fan Bao · Chongxuan Li · Jiacheng Sun · Jun Zhu · Bo Zhang
- 2021 Poster: Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
  Fan Bao · Kun Xu · Chongxuan Li · Lanqing Hong · Jun Zhu · Bo Zhang
- 2021 Spotlight: Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models
  Fan Bao · Kun Xu · Chongxuan Li · Lanqing Hong · Jun Zhu · Bo Zhang
- 2020 Poster: Understanding and Stabilizing GANs' Training Dynamics Using Control Theory
  Kun Xu · Chongxuan Li · Jun Zhu · Bo Zhang