Poster
in
Workshop: Structured Probabilistic Inference and Generative Modeling

Uncovering Latent Structure Using Random Partition Models

Thomas Sutter · Alain Ryser · Joram Liebeskind · Julia Vogt

Keywords: [ Generative Models ] [ Representation Learning ] [ Weak Supervision ] [ VAE ] [ Random Partition Model ] [ Reparameterization ] [ Continuous Relaxation ] [ Deep Learning ] [ Variational Clustering ]


Abstract:

Partitioning a set of elements into an unknown number of mutually exclusive subsets is essential in many machine learning problems. However, assigning elements, such as samples in a dataset or neurons in a network layer, to an unknown and discrete number of subsets is inherently non-differentiable, prohibiting end-to-end gradient-based optimization of parameters. We overcome this limitation by proposing a novel two-step method for inferring partitions, which enables its use in variational inference tasks. This new approach yields reparameterized gradients with respect to the parameters of the new random partition model. Our method first infers the number of elements per subset and then fills these subsets in a learned order. We highlight the versatility of our general-purpose approach on two challenging experiments: variational clustering and the inference of shared and independent generative factors under weak supervision.
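The two-step procedure in the abstract (first infer subset sizes, then fill the subsets in a learned order) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a Gumbel-softmax relaxation for the size distribution and uses a hard argsort for the learned ordering; all function names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    # Continuous relaxation of a categorical sample (Gumbel-softmax).
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

def sample_partition(n_elements, size_logits, order_logits, tau=0.5):
    """Hypothetical two-step partition sketch.

    Step 1: infer how many elements each subset receives.
    Step 2: fill the subsets with elements in a learned order.
    """
    # Step 1: relaxed sample over subset-size proportions.
    size_probs = gumbel_softmax(size_logits, tau)            # (n_subsets,)
    sizes = np.floor(size_probs * n_elements).astype(int)
    sizes[-1] += n_elements - sizes.sum()                    # sizes sum to n
    # Step 2: a learned ordering of the elements (here a hard argsort
    # stands in for a differentiable relaxation, for illustration).
    order = np.argsort(-order_logits)
    # Fill the subsets with the ordered elements.
    partition, start = [], 0
    for s in sizes:
        partition.append(order[start:start + s].tolist())
        start += s
    return partition

parts = sample_partition(
    n_elements=6,
    size_logits=np.array([0.5, 1.0, 0.2]),
    order_logits=np.array([0.3, 0.1, 0.9, 0.4, 0.2, 0.8]),
)
```

Every element lands in exactly one subset, and the subsets are mutually exclusive; a real variational implementation would keep both steps relaxed so gradients flow to `size_logits` and `order_logits`.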
