

Oral

Stochastic Blockmodels meet Graph Neural Networks

Nikhil Mehta · Lawrence Carin · Piyush Rai

Abstract: Stochastic blockmodels (SBM) and their variants, e.g., mixed-membership and overlapping stochastic blockmodels, are latent variable models for graphs. Such methods have proven successful at multiple tasks on graph-structured data, including community discovery and link prediction. Recently, graph neural networks, e.g., graph convolutional networks, have also emerged as a promising approach to learning powerful representations (embeddings) for the nodes in a graph, by exploiting graph properties such as locality and invariance. In this work, we unify these two directions by developing a novel, \emph{sparse} variational autoencoder for graphs that retains the interpretability of SBMs while also enjoying the excellent predictive performance of graph neural nets. Moreover, our framework is accompanied by a \emph{recognition model} that enables fast inference of the node embeddings (which is of independent interest for inference in traditional SBMs). Although we develop this framework for a particular type of SBM, namely the \emph{overlapping} stochastic blockmodel, the proposed framework can be adapted readily to other types of SBMs as well. Experimental results on several benchmark datasets demonstrate that our model outperforms various state-of-the-art methods for community discovery and link prediction. For reproducibility, the code is shared in the supplementary material and will be made public in the final version.
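To make the encoder/decoder pattern the abstract describes concrete, below is a minimal, hypothetical sketch of a graph-autoencoder-style pipeline: a single graph-convolution layer produces node embeddings, and an inner-product decoder scores candidate links. This is an illustrative assumption, not the authors' model; all function names, sizes, and the toy graph are invented for the example, the weights are untrained random values, and the sparse/overlapping-SBM prior of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adjacency(A):
    """Symmetrically normalize A + I, as in a graph convolutional layer."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_encoder(A, X, W):
    """One graph-convolution layer producing node embeddings Z (ReLU)."""
    return np.maximum(normalize_adjacency(A) @ X @ W, 0.0)

def decode_links(Z):
    """Inner-product decoder: edge probabilities via a logistic link."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Toy graph: 4 nodes forming two 2-node communities.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                  # one-hot node features (assumption)
W = rng.normal(size=(4, 2))    # random, untrained layer weights

Z = gcn_encoder(A, X, W)       # node embeddings
P = decode_links(Z)            # predicted edge probabilities, shape (4, 4)
```

In the full model described by the abstract, the encoder would output parameters of a variational posterior over sparse community-membership embeddings, and training would maximize an evidence lower bound rather than use fixed random weights.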
