The multivariate probit model (MVP) is a classic model widely used to study binary responses of multiple entities. Nevertheless, learning the MVP model is computationally challenging because its likelihood involves integrating over a multidimensional constrained space of latent variables, which significantly limits its application in practice. We propose a flexible deep generalization of the classic MVP, the Deep Multivariate Probit Model (DMVP): an end-to-end learning scheme that uses an efficient parallel sampling process for the multivariate probit model to exploit GPU-boosted deep neural networks. We present both theoretical and empirical analyses of the convergence behavior of DMVP's sampling process with respect to the resolution of the correlation structure. We provide convergence guarantees for DMVP, and our empirical analysis demonstrates the advantages of DMVP's sampling over standard MCMC-based methods. We also show that, when applied to multi-entity modeling problems, which are natural applications of DMVP, DMVP trains at least an order of magnitude faster than the classic MVP, captures rich correlations among entities, and improves the joint likelihood of entities compared with several competitive models.
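To make the likelihood integral in the abstract concrete: under the MVP, each binary response is y_i = 1[z_i > 0] with latent z ~ N(mu, Sigma), so P(y | mu, Sigma) is the Gaussian mass of the orthant selected by the sign pattern of y. The sketch below estimates that orthant probability with a plain Monte Carlo estimator in PyTorch; it is illustrative only and is not the paper's parallel sampling scheme. The function name `mvp_orthant_prob` and all numbers are assumptions made for the example.

```python
import torch

def mvp_orthant_prob(mu, Sigma, y, n_samples=100_000):
    """Naive Monte Carlo estimate of the MVP likelihood P(y | mu, Sigma).

    y_i = 1 iff the latent z_i > 0, where z ~ N(mu, Sigma); P(y) is the
    probability that a Gaussian draw lands in the orthant matching y.
    """
    L = torch.linalg.cholesky(Sigma)          # Sigma = L @ L.T
    eps = torch.randn(n_samples, mu.shape[0], device=mu.device)
    z = mu + eps @ L.T                        # n_samples draws from N(mu, Sigma)
    match = ((z > 0) == y.bool()).all(dim=1)  # does the sign pattern agree with y?
    return match.float().mean()

# Example: 3 correlated binary entities with observed response y = (1, 1, 0).
mu = torch.tensor([0.5, -0.2, 0.1])
Sigma = torch.tensor([[1.0, 0.6, 0.3],
                      [0.6, 1.0, 0.4],
                      [0.3, 0.4, 1.0]])
y = torch.tensor([1.0, 1.0, 0.0])
print(mvp_orthant_prob(mu, Sigma, y))
```

All draws are independent, so the estimator parallelizes trivially on a GPU (move the tensors with `.to("cuda")`), which is the kind of hardware-friendly sampling the abstract alludes to.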
Author Information
Di Chen (Cornell University)
Yexiang Xue (Purdue University)
Carla Gomes (Cornell University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: End-to-End Learning for the Deep Multivariate Probit Model
  Thu Jul 12th 09:50 -- 10:00 AM, Room A5
More from the Same Authors
- 2020 Poster: Deep Reasoning Networks for Unsupervised Pattern De-mixing with Constraint Reasoning
  Di Chen · Yiwei Bai · Wenting Zhao · Sebastian Ament · John Gregoire · Carla Gomes