Unsupervised learning from a continuous stream of data is arguably one of the most common and most challenging problems facing intelligent agents. One class of unsupervised models, collectively termed \textit{feature models}, attempts unsupervised discovery of latent features underlying the data and includes common models such as PCA, ICA, and NMF. However, if the data arrives in a continuous stream, determining the number of features is a significant challenge and the number may grow with time. In this work, we make feature models significantly more applicable to streaming data by imbuing them with the ability to create new features, online, in a probabilistic and principled manner. To achieve this, we derive a novel recursive form of the Indian Buffet Process, which we term the \textit{Recursive IBP} (R-IBP). We demonstrate that R-IBP can be used as a prior for feature models to efficiently infer a posterior over an unbounded number of latent features, with quasilinear average time complexity and logarithmic average space complexity. We compare R-IBP to existing offline sampling and variational baselines in two feature models (Linear Gaussian and Factor Analysis) and demonstrate on synthetic and real data that R-IBP achieves comparable or better performance in significantly less time.
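For context on the prior the abstract builds on: below is a minimal sketch of the classic sequential (culinary-metaphor) Indian Buffet Process generative model, not the paper's Recursive IBP. The function name `sample_ibp` and its signature are illustrative assumptions; the point is only that each new data point reuses an existing feature $k$ with probability $m_k/n$ and draws $\mathrm{Poisson}(\alpha/n)$ brand-new features, so the number of features is unbounded and grows roughly as $\alpha \log n$.

```python
import numpy as np

def sample_ibp(num_customers, alpha, seed=None):
    """Sample a binary feature-assignment matrix Z from the standard IBP.

    Customer n (1-indexed) takes existing dish k with probability m_k / n,
    where m_k counts how many earlier customers took dish k, then samples
    Poisson(alpha / n) previously untasted dishes.
    """
    rng = np.random.default_rng(seed)
    counts = []  # m_k: number of customers who have taken each dish so far
    rows = []    # per-customer sets of active feature indices
    for n in range(1, num_customers + 1):
        active = set()
        # Reuse existing features in proportion to their popularity.
        for k, m_k in enumerate(counts):
            if rng.random() < m_k / n:
                active.add(k)
        # Create brand-new features; the rate decays as 1/n.
        for _ in range(rng.poisson(alpha / n)):
            counts.append(0)
            active.add(len(counts) - 1)
        for k in active:
            counts[k] += 1
        rows.append(active)
    # Densify into an (num_customers x K) binary matrix.
    Z = np.zeros((num_customers, len(counts)), dtype=int)
    for i, active in enumerate(rows):
        for k in active:
            Z[i, k] = 1
    return Z
```

Offline inference with this prior (e.g. Gibbs sampling) requires revisiting all data; the paper's contribution is a recursive reformulation that updates the posterior over such matrices online as each observation streams in.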
Author Information
Rylan Schaeffer (Stanford University)
Yilun Du (MIT)
Gabrielle K Liu (Massachusetts Institute of Technology)
Ila R. Fiete (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Streaming Inference for Infinite Feature Models
  Wed. Jul 20th through Thu the 21st, Room Hall E #800
More from the Same Authors
- 2022: No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit
  Rylan Schaeffer · Mikail Khona · Ila R. Fiete
- 2023: FACADE: A Framework for Adversarial Circuit Anomaly Detection and Evaluation
  Dhruv Pai · Andres Carranza · Rylan Schaeffer · Arnuv Tandon · Sanmi Koyejo
- 2023: Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting
  Rylan Schaeffer · Kateryna Pistunova · Samar Khanna · Sarthak Consul · Sanmi Koyejo
- 2023: Optimizing protein fitness using Bi-level Gibbs sampling with Graph-based Smoothing
  Andrew Kirjner · Jason Yim · Raman Samusevich · Tommi Jaakkola · Regina Barzilay · Ila R. Fiete
- 2023: Are Emergent Abilities of Large Language Models a Mirage?
  Rylan Schaeffer · Brando Miranda · Sanmi Koyejo
- 2023: Optimizing protein fitness using Gibbs sampling with Graph-based Smoothing
  Andrew Kirjner · Jason Yim · Raman Samusevich · Tommi Jaakkola · Regina Barzilay · Ila R. Fiete
- 2023: Deceptive Alignment Monitoring
  Andres Carranza · Dhruv Pai · Rylan Schaeffer · Arnuv Tandon · Sanmi Koyejo
- 2023 Poster: Model-agnostic Measure of Generalization Difficulty
  Akhilan Boopathy · Kevin Liu · Jaedong Hwang · Shu Ge · Asaad Mohammedsaleh · Ila R. Fiete
- 2023 Poster: Emergence of Sparse Representations from Noise
  Trenton Bricken · Rylan Schaeffer · Bruno Olshausen · Gabriel Kreiman
- 2022 Poster: Planning with Diffusion for Flexible Behavior Synthesis
  Michael Janner · Yilun Du · Josh Tenenbaum · Sergey Levine
- 2022 Oral: Planning with Diffusion for Flexible Behavior Synthesis
  Michael Janner · Yilun Du · Josh Tenenbaum · Sergey Levine
- 2022 Poster: How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
  Akhilan Boopathy · Ila R. Fiete
- 2022 Poster: Learning Iterative Reasoning through Energy Minimization
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2022 Poster: Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold
  Sugandha Sharma · Sarthak Chandra · Ila R. Fiete
- 2022 Spotlight: Content Addressable Memory Without Catastrophic Forgetting by Heteroassociation with a Fixed Scaffold
  Sugandha Sharma · Sarthak Chandra · Ila R. Fiete
- 2022 Spotlight: How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective
  Akhilan Boopathy · Ila R. Fiete
- 2022 Spotlight: Learning Iterative Reasoning through Energy Minimization
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2021 Poster: Improved Contrastive Divergence Training of Energy-Based Models
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2021 Spotlight: Improved Contrastive Divergence Training of Energy-Based Models
  Yilun Du · Shuang Li · Josh Tenenbaum · Igor Mordatch
- 2019 Poster: Task-Agnostic Dynamics Priors for Deep Reinforcement Learning
  Yilun Du · Karthik Narasimhan
- 2019 Oral: Task-Agnostic Dynamics Priors for Deep Reinforcement Learning
  Yilun Du · Karthik Narasimhan