Poster
Random Function Priors for Correlation Modeling
Aonan Zhang · John Paisley
Pacific Ballroom #222
Keywords: [ Approximate Inference ] [ Bayesian Deep Learning ] [ Bayesian Methods ] [ Bayesian Nonparametrics ] [ Generative Models ]
Abstract:
The likelihood model of high-dimensional data $X_n$ can often be expressed as $p(X_n|Z_n,\theta)$, where $\theta\mathrel{\mathop:}=(\theta_k)_{k\in[K]}$ is a collection of hidden features shared across objects (indexed by $n$), and $Z_n$ is a non-negative factor loading vector with $K$ entries whose entry $Z_{nk}$ indicates the strength with which $\theta_k$ is used to express $X_n$. In this paper, we introduce random function priors for $Z_n$ to model correlations among its $K$ dimensions $Z_{n1}$ through $Z_{nK}$, an approach we call population random measure embedding (PRME). Our model can be viewed as a generalized paintbox model (Broderick et al., 2013) built from random functions, and can be learned efficiently with neural networks via amortized variational inference. We derive our Bayesian nonparametric method by applying a representation theorem on separately exchangeable discrete random measures.
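As a rough illustration of the setup described in the abstract (not the authors' PRME model), the sketch below produces correlated non-negative loadings $Z_n$ by passing a per-object embedding and per-feature embeddings through one shared function realized as a small neural network, then combines them with shared features $\theta$ to parameterize $p(X_n|Z_n,\theta)$. All architecture choices here (embedding sizes, softplus link, linear combination of features) are illustrative assumptions.

```python
# Hypothetical sketch: correlated non-negative factor loadings Z_n generated by a
# shared function applied to per-object and per-feature embeddings.
# Names, dimensions, and link functions are assumptions, not the paper's model.
import torch
import torch.nn as nn

N, K, D_x, D_emb = 128, 20, 50, 8   # objects, features, data dimension, embedding dimension

class RandomFunctionLoadings(nn.Module):
    def __init__(self, n_features, d_emb):
        super().__init__()
        self.obj_enc = nn.Linear(D_x, d_emb)                      # amortized: embed each X_n
        self.feat_emb = nn.Parameter(torch.randn(n_features, d_emb))  # per-feature embeddings
        self.f = nn.Sequential(                                   # shared function over (object, feature) pairs
            nn.Linear(2 * d_emb, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, X):
        u = self.obj_enc(X)                                       # (N, d_emb)
        pairs = torch.cat(
            [u.unsqueeze(1).expand(-1, self.feat_emb.shape[0], -1),
             self.feat_emb.unsqueeze(0).expand(X.shape[0], -1, -1)],
            dim=-1,
        )                                                         # (N, K, 2 * d_emb)
        # Softplus keeps loadings non-negative; the shared network induces
        # correlations across the K dimensions of each Z_n.
        return nn.functional.softplus(self.f(pairs)).squeeze(-1)  # (N, K)

theta = torch.randn(K, D_x)                 # shared hidden features theta_k
X = torch.randn(N, D_x)                     # placeholder data
Z = RandomFunctionLoadings(K, D_emb)(X)     # correlated non-negative loadings
X_mean = Z @ theta                          # e.g., mean parameter of p(X_n | Z_n, theta)
```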