Poster

Neural Clustering Processes

Ari Pakman · Yueqi Wang · Catalin Mitelut · JinHyung Lee · Liam Paninski

Keywords: [ Deep Learning - General ] [ Neuroscience and Cognitive Science ] [ Clustering ] [ Architectures ] [ Approximate Inference ]


Abstract:

Probabilistic clustering models (or, equivalently, mixture models) are basic building blocks in countless statistical models and involve latent random variables over discrete spaces. For these models, posterior inference can be inaccurate, very slow, or both. In this work we introduce deep network architectures trained with labeled samples from any generative model of clustered datasets. At test time, the networks generate approximate posterior samples of cluster labels for any new dataset of arbitrary size. We develop two complementary approaches to this task, requiring either O(N) or O(K) network forward passes per dataset, where N is the dataset size and K the number of clusters. Unlike previous approaches, our methods sample the labels of all the data points from a well-defined posterior, and they can learn nonparametric Bayesian posteriors because they do not limit the number of mixture components. As a scientific application, we present a novel approach to neural spike sorting for high-density multielectrode arrays.
