End-to-End Learning for the Deep Multivariate Probit Model
Di Chen · Yexiang Xue · Carla Gomes

Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #47

The multivariate probit model (MVP) is a popular classic model for studying binary responses of multiple entities. Nevertheless, the computational challenge of learning the MVP model, whose likelihood involves integrating over a multidimensional constrained space of latent variables, significantly limits its application in practice. We propose a flexible deep generalization of the classic MVP, the Deep Multivariate Probit Model (DMVP), an end-to-end learning scheme that uses an efficient parallel sampling process for the multivariate probit model to exploit GPU-accelerated deep neural networks. We present both theoretical and empirical analyses of the convergence behavior of DMVP's sampling process with respect to the resolution of the correlation structure. We provide convergence guarantees for DMVP, and our empirical analysis demonstrates the advantages of DMVP's sampling compared with standard MCMC-based methods. We also show that when applied to multi-entity modeling problems, which are natural DMVP applications, DMVP trains at least an order of magnitude faster than the classical MVP, captures rich correlations among entities, and further improves the joint likelihood of entities compared with several competitive models.
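To illustrate the computational challenge the abstract refers to, the sketch below estimates an MVP likelihood by parallel (vectorized) Monte Carlo sampling. In the multivariate probit model, binary responses y arise from a latent Gaussian z ~ N(mu, Sigma) via y_i = 1[z_i > 0], so the likelihood P(y) is the Gaussian mass of the orthant selected by y. This is a minimal NumPy illustration of orthant-probability estimation by sampling, not the authors' DMVP implementation; the function name and parameter values are hypothetical.

```python
import numpy as np

def mvp_likelihood_mc(mu, Sigma, y, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the MVP likelihood P(y | mu, Sigma).

    Draws all latent samples at once (vectorized, hence trivially
    parallelizable on GPU) and counts the fraction that fall in the
    orthant of N(mu, Sigma) selected by the binary response vector y.
    """
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(mu, Sigma, size=n_samples)
    # A sample is consistent with y when sign(z_i) matches y_i for all i.
    in_orthant = np.all((z > 0) == (np.asarray(y) == 1), axis=1)
    return in_orthant.mean()

# Example: two positively correlated entities, both responding 1.
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
p = mvp_likelihood_mc(mu, Sigma, y=[1, 1])
```

For this zero-mean bivariate case the orthant probability has a closed form, 1/4 + arcsin(0.8)/(2π) ≈ 0.398, which the sampled estimate approaches as `n_samples` grows; in higher dimensions no such closed form exists, which is why the MVP likelihood is expensive to compute exactly.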

Author Information

Di Chen (Cornell University)
Yexiang Xue (Purdue University)
Carla Gomes (Cornell University)

Related Events (a corresponding poster, oral, or spotlight)

More from the Same Authors