

Oral in Workshop: Machine Learning for Multimodal Healthcare Data

Neural Graph Revealers

Harsh Shrivastava · Urszula Chajewska

Keywords: [ Data sparsity, incompleteness and complexity ] [ Multimodal fusion ] [ Electronic healthcare records ] [ Multimodal biomarkers ]


Abstract:

Sparse graph recovery methods work well when the data follow their assumptions, but they are often not designed to support downstream probabilistic queries. This limits their adoption to merely identifying connections among domain variables. Probabilistic Graphical Models (PGMs), on the other hand, learn an underlying base graph between variables together with a distribution over them. PGM design choices are carefully made so that inference and sampling algorithms are efficient, which imposes certain restrictions and often simplifying assumptions. In this work, we propose Neural Graph Revealers (NGRs), an attempt to efficiently merge sparse graph recovery methods with PGMs into a single flow. The problem setting consists of input data X with D features and M samples; the task is to recover a sparse graph showing connections between the features while simultaneously learning a probability distribution over the D features. NGRs view neural networks as a 'glass box', or more specifically as a multitask learning framework. We introduce the graph-constrained path norm, which NGRs leverage to learn a graphical model that captures complex non-linear functional dependencies between features in the form of an undirected sparse graph. Furthermore, NGRs can handle multimodal inputs such as images, text, categorical data, and embeddings, which are not straightforward to incorporate in existing methods. We show experimental results on data from Gaussian graphical models and on a multimodal infant mortality dataset from the CDC.
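To make the graph-constrained path norm concrete, the following is a minimal PyTorch sketch (not the authors' released code) of the core idea as described in the abstract: a small network regresses the D features onto themselves, and the product of the layers' absolute weight matrices yields a path matrix S whose entry S[i, j] measures how strongly output j depends on input i through all network paths. Penalizing the off-diagonal entries of S sparsifies the learned dependency graph. The names NGRSketch, hidden_dim, and lam are illustrative assumptions, not identifiers from the paper.

import torch
import torch.nn as nn

class NGRSketch(nn.Module):
    def __init__(self, num_features: int, hidden_dim: int = 64):
        super().__init__()
        # One hidden layer mapping the D features back to themselves.
        self.fc1 = nn.Linear(num_features, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_features)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

    def path_matrix(self):
        # S[i, j] = sum over hidden units h of |W1[h, i]| * |W2[j, h]|,
        # i.e. total absolute path weight from input i to output j.
        return self.fc1.weight.abs().t() @ self.fc2.weight.abs().t()

def ngr_loss(model: NGRSketch, x: torch.Tensor, lam: float = 0.1):
    recon = ((model(x) - x) ** 2).mean()       # fit each feature from the rest
    S = model.path_matrix()
    off_diag = S - torch.diag(torch.diag(S))   # mask self-dependencies
    return recon + lam * off_diag.abs().sum()  # graph-constrained path norm

After training with this objective, one plausible way to read off the recovered undirected graph is to symmetrize and threshold the path matrix, e.g. A = ((S + S.t()) / 2) > tau for some small tau; the threshold tau is an assumption of this sketch rather than a prescription from the paper.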
