Testing Bayesian Networks (TBNs) were introduced recently to represent a set of distributions, one of which is selected based on the given evidence and used for reasoning. TBNs are more expressive than classical Bayesian Networks (BNs): Marginal queries correspond to multi-linear functions in BNs and to piecewise multi-linear functions in TBNs. Moreover, marginal TBN queries are universal approximators, like neural networks. In this paper, we study conditional independence in TBNs, showing that it can be inferred from d-separation as in BNs. We also study the role of TBN expressiveness and independence in dealing with the problem of learning using incomplete models (i.e., ones that are missing nodes or edges from the data-generating model). Finally, we illustrate our results on a number of concrete examples, including a case study on (high order) Hidden Markov Models.
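The abstract's central result is that conditional independence in TBNs can be inferred from d-separation, just as in classical BNs. As a concrete illustration of what d-separation tests (not the paper's own code), here is a minimal sketch of the standard ancestral-moralization check on a classical BN, where a DAG is assumed to be given as a dict mapping each node to its list of parents:

```python
from collections import defaultdict

def d_separated(dag, x, y, z):
    """Check whether x and y are d-separated given evidence set z.

    dag: dict mapping node -> list of parent nodes (a DAG).
    Method: restrict to the ancestral subgraph of {x, y} and z,
    moralize it, delete the evidence nodes, and test connectivity.
    """
    z = set(z)
    # 1. Ancestral subgraph: x, y, z and all their ancestors.
    relevant = set()
    stack = [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(dag.get(n, []))
    # 2. Moralize: undirected parent-child edges, plus edges
    #    "marrying" co-parents of a common child.
    nbrs = defaultdict(set)
    for child in relevant:
        parents = [p for p in dag.get(child, []) if p in relevant]
        for p in parents:
            nbrs[p].add(child)
            nbrs[child].add(p)
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                nbrs[p].add(q)
                nbrs[q].add(p)
    # 3. Delete evidence nodes and test whether x still reaches y.
    seen, stack = set(), [x]
    while stack:
        n = stack.pop()
        if n == y:
            return False  # connected, hence not d-separated
        if n not in seen and n not in z:
            seen.add(n)
            stack.extend(nbrs[n])
    return True
```

For example, in the chain A → B → C, observing B blocks the path (A and C become d-separated), while in the collider A → C ← B, observing C opens the path. The paper shows that this same graphical test remains sound for TBNs despite their richer, piecewise multi-linear query functions.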
Yujia Shen (UCLA)
Haiying Huang (UCLA)
Arthur Choi (UCLA)
Adnan Darwiche (UCLA)
Related Events
2019 Poster: Conditional Independence in Testing Bayesian Networks
Thu Jun 13, 01:30–04:00 AM, Room: Pacific Ballroom