Tractable probabilistic models obviate the need for unreliable approximate inference approaches and as a result often yield accurate query answers in practice. However, most tractable models that achieve state-of-the-art generalization performance (measured using test set likelihood score) use latent variables. Such models admit poly-time marginal (MAR) inference but do not admit poly-time (full) maximum-a-posteriori (MAP) inference. To address this problem, in this paper we propose a novel approach for inducing cutset networks, a well-known tractable representation that does not use latent variables and therefore admits linear-time exact MAR and MAP inference. Our approach addresses a major limitation of existing techniques that learn cutset networks from data: their accuracy is quite low compared to that of latent variable models such as sum-product networks and bags of cutset networks. The key idea in our approach is to construct deep cutset networks by not only learning them from data but also compiling them from a more accurate latent tractable model. We show experimentally that our new approach yields more accurate MAP estimates than existing approaches. Moreover, it significantly improves the test set log-likelihood score of cutset networks, bringing their generalization performance closer to that of latent variable models.
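To make the linear-time exact MAP inference claim concrete, below is a minimal illustrative sketch (not the authors' implementation). It runs MAP inference on a toy cutset network over binary variables; for simplicity the leaves are fully factorized Bernoulli distributions rather than the tree-structured (Chow-Liu) leaf distributions used in actual cutset networks, and the names OrNode, Leaf, and map_inference are hypothetical.

# Minimal Python sketch (illustrative, not the authors' code): exact MAP
# inference in a toy cutset network over binary variables. Internal OR nodes
# condition on a single observed variable; leaves here are fully factorized
# Bernoulli distributions (real cutset networks use Chow-Liu tree leaves).
# With no latent variables, MAP decomposes: maximize each leaf independently,
# then keep the best weighted branch at every OR node -- one pass, linear time.

from dataclasses import dataclass
from typing import Dict, Tuple, Union

@dataclass
class Leaf:
    probs: Dict[str, float]            # P(X = 1) for each leaf variable

@dataclass
class OrNode:
    var: str                           # cutset variable this node conditions on
    weights: Tuple[float, float]       # P(var = 0), P(var = 1)
    children: Tuple["Node", "Node"]    # sub-networks for var = 0 and var = 1

Node = Union[Leaf, OrNode]

def map_inference(node: Node) -> Tuple[float, Dict[str, int]]:
    """Return (maximum probability, maximizing assignment) for the sub-network."""
    if isinstance(node, Leaf):
        assignment = {x: int(p >= 0.5) for x, p in node.probs.items()}
        prob = 1.0
        for x, p in node.probs.items():
            prob *= p if assignment[x] == 1 else 1.0 - p
        return prob, assignment
    best_prob, best_assignment = -1.0, {}
    for value in (0, 1):               # try both values of the cutset variable
        child_prob, child_assignment = map_inference(node.children[value])
        prob = node.weights[value] * child_prob
        if prob > best_prob:
            best_prob = prob
            best_assignment = {node.var: value, **child_assignment}
    return best_prob, best_assignment

# Tiny example: condition on A at the root; B and C live in the leaves.
net = OrNode("A", (0.3, 0.7),
             (Leaf({"B": 0.9, "C": 0.2}), Leaf({"B": 0.4, "C": 0.8})))
print(map_inference(net))              # -> (~0.336, {'A': 1, 'B': 0, 'C': 1})

In a latent variable model such as a sum-product network, the sum nodes would require maximizing over hidden states, which is what makes exact MAP intractable there; the OR nodes above involve only observed variables, so the single bottom-up pass is exact.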
Author Information
Tahrima Rahman (The University of Texas at Dallas)
Shasha Jin (The University of Texas at Dallas)
Vibhav Gogate (The University of Texas at Dallas)
Related Events (a corresponding poster, oral, or spotlight)
-
2019 Poster: Look Ma, No Latent Variables: Accurate Cutset Networks via Compilation
Wed. Jun 12th 01:30 -- 04:00 AM, Pacific Ballroom #132
More from the Same Authors
-
2019 Workshop: The Third Workshop On Tractable Probabilistic Modeling (TPM)
Pedro Domingos · Daniel Lowd · Tahrima Rahman · Antonio Vergari · Alejandro Molina