Poster in Workshop: Structured Probabilistic Inference and Generative Modeling
Hurdle Conjugate Priors for Scalable Tucker Decomposition
John Hood · Aaron Schein
Keywords: [ Tucker ] [ Tensor decomposition ] [ Bayesian ] [ L0 regularization ]
This paper introduces a novel inference scheme for a class of hurdle priors that exploits sparsity to scale inference in large machine learning models with convolution-closed likelihoods, such as the Gaussian and Poisson. We call this the convolution-closed hurdle motif and focus on non-negative Tucker decomposition, a popular tool for modeling multi-way relational data. We apply one instance of this class of priors, the hurdle gamma prior, to a probabilistic non-negative Tucker model and derive an inference scheme whose cost scales only with the number of non-zero latent parameters in the core tensor. This scheme avoids the exponential blowup in computational cost typical of Tucker decomposition, efficiently fitting a high-dimensional latent space to the data. We derive and implement a closed-form Gibbs sampler for full posterior inference and fit our model to longitudinal microbiome data. Using this hurdle motif to train our model quickly, we reveal interpretable qualitative structure and obtain encouraging classification results.
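To make the scaling idea concrete, here is a minimal sketch, assuming a Poisson likelihood and a Bernoulli-Gamma hurdle form for the core tensor prior; the hyperparameters (rho, a, b), the toy dimensions, and the function names are illustrative assumptions, not taken from the paper. The reconstruction loop visits only the non-zero core entries, so its cost grows with the sparsity of the core rather than with the full latent dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hurdle_gamma(dims, rho=0.05, a=1.0, b=1.0):
    """Draw a core tensor from a hurdle gamma prior (illustrative sketch):
    each entry is exactly zero w.p. 1 - rho, else Gamma(shape=a, rate=b)."""
    mask = rng.random(dims) < rho                  # Bernoulli "hurdle" indicator
    vals = rng.gamma(a, 1.0 / b, size=dims)        # numpy parameterizes by scale = 1/rate
    return np.where(mask, vals, 0.0)

def sparse_tucker_mean(G, U, V, W):
    """Rate tensor of a non-negative Tucker model, summing only over the
    non-zero core entries: cost is O(nnz(G) * I*J*K) instead of
    O(Q*R*S * I*J*K) for a dense core."""
    I, J, K = U.shape[0], V.shape[0], W.shape[0]
    M = np.zeros((I, J, K))
    for q, r, s in zip(*np.nonzero(G)):
        M += G[q, r, s] * np.einsum('i,j,k->ijk', U[:, q], V[:, r], W[:, s])
    return M

# Toy dimensions, chosen only for illustration.
Q, R, S, I, J, K = 8, 8, 8, 20, 20, 20
G = sample_hurdle_gamma((Q, R, S))
U, V, W = (rng.gamma(1.0, 1.0, size=(n, d)) for n, d in [(I, Q), (J, R), (K, S)])
Y = rng.poisson(sparse_tucker_mean(G, U, V, W))    # simulated count tensor
```

With rho = 0.05 the 8x8x8 core has roughly 26 active entries in expectation, so the reconstruction touches a few dozen rank-one components rather than all 512; this is the sense in which inference can scale with only the non-zero latent parameters.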