Poster
Zero-Inflated Exponential Family Embeddings
Liping Liu · David Blei
Gallery #113
Word embeddings are a widely used tool to analyze language, and exponential family embeddings (Rudolph et al., 2016) generalize the technique to other types of data. One challenge in fitting embedding methods is sparse data, such as a document/term matrix that contains many zeros. To address this issue, practitioners typically downweight or subsample the zeros, thus focusing learning on the non-zero entries. In this paper, we develop zero-inflated embeddings, a new embedding method that is designed to learn from sparse observations. In a zero-inflated embedding (ZIE), a zero in the data can come from an interaction with other data (i.e., an embedding) or from a separate process by which many observations are equal to zero (i.e., a probability mass at zero). Fitting a ZIE naturally downweights the zeros and dampens their influence on the model. Across many types of data---language, movie ratings, shopping histories, and bird-watching logs---we find that zero-inflated embeddings provide improved predictive performance over standard approaches and yield better vector representations of items.
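To make the mixture concrete, here is a minimal sketch of a zero-inflated likelihood for a single entry, using a Poisson exponential family embedding whose rate is linked to the inner product of an item's embedding and its context. The function name, the choice of Poisson as the exponential family, and the parameter `pi` are assumptions for exposition, not the authors' code.

```python
import numpy as np
from math import lgamma

def zie_log_likelihood(x, rho, alpha_context, pi):
    """Log-likelihood of one observed count x under a zero-inflated
    Poisson embedding (illustrative sketch).

    x              : observed count (e.g., a term count in a document)
    rho            : embedding vector of the target item
    alpha_context  : sum of the context vectors of co-occurring items
    pi             : probability that x comes from the point mass at zero
    """
    # Exponential family embedding component: the Poisson rate is
    # linked to the embedding/context inner product.
    lam = np.exp(rho @ alpha_context)
    log_pois = x * np.log(lam) - lam - lgamma(x + 1)  # Poisson log-pmf
    if x == 0:
        # A zero can come from the point mass at zero OR from the
        # Poisson itself, so the two mixture components are summed;
        # this is what downweights zeros during fitting.
        return np.log(pi + (1.0 - pi) * np.exp(log_pois))
    # A non-zero entry must come from the embedding component.
    return np.log1p(-pi) + log_pois
```

For example, `zie_log_likelihood(0, rho, ctx, pi=0.7)` assigns most of a zero's probability to the point mass, so the gradient with respect to the embedding parameters is correspondingly dampened for that entry.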