
Latent Feature Lasso
En-Hsu Yen · Wei-Cheng Lee · Sung-En Chang · Arun Suggala · Shou-De Lin · Pradeep Ravikumar

Sun Aug 06 06:06 PM -- 06:24 PM (PDT) @ C4.4

The latent feature model (LFM), proposed in \cite{griffiths2005infinite}, but possibly with earlier origins, is a generalization of a mixture model, where each instance is generated not from a single latent class but from a combination of \emph{latent features}. Thus, each instance has an associated latent binary feature incidence vector indicating the presence or absence of each feature. Due to its combinatorial nature, inference in LFMs is in general intractable, and accordingly, most attention has focused on nonparametric LFMs, with priors such as the Indian Buffet Process (IBP) on infinite binary matrices. Recent efforts to tackle this complexity either still have computational complexity that is exponential, or sample complexity that is high-order polynomial, in the number of latent features. In this paper, we address this outstanding problem of tractable estimation of LFMs via a novel atomic-norm regularization, which gives an algorithm with polynomial run-time and sample complexity without impractical assumptions on the data distribution.
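To make the generative structure concrete, the following is a minimal sketch of a linear-Gaussian latent feature model, the standard instantiation of an LFM. The sizes, sparsity level, and variable names (`Z`, `A`) are illustrative assumptions, not taken from the paper: each row of the binary matrix `Z` is an instance's feature incidence vector, and an observation is the sum of its active feature vectors plus noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n instances, d observed dimensions, K latent features.
n, d, K = 100, 20, 5

# Each row of Z is a binary feature-incidence vector: which of the K
# latent features are present in that instance (here, each with prob. 0.3).
Z = rng.binomial(1, 0.3, size=(n, K))

# A holds one d-dimensional latent feature per row.
A = rng.normal(size=(K, d))

# An observation combines its active features, plus Gaussian noise --
# a generalization of a mixture model, where Z would be one-hot.
X = Z @ A + 0.1 * rng.normal(size=(n, d))

print(X.shape)
```

Estimating `Z` and `A` jointly from `X` is the combinatorially hard problem the abstract refers to: `Z` ranges over exponentially many binary matrices.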

Author Information

En-Hsu Yen (Carnegie Mellon University)

I am currently a PhD student in the School of Computer Science at Carnegie Mellon University (Machine Learning Department), working with Pradeep Ravikumar and Inderjit Dhillon. I received my B.S./B.B.A./M.S. degrees from the CSIE and IM departments of National Taiwan University, where I worked with Shou-De Lin. My research focuses on large-scale machine learning, convex optimization, and their applications.

Wei-Cheng Lee (National Taiwan University)
Sung-En Chang (National Taiwan University)
Arun Suggala (Carnegie Mellon University)
Shou-De Lin (National Taiwan University)
Pradeep Ravikumar (Carnegie Mellon University)
