We present neural activation coding (NAC), a novel approach for learning deep representations from unlabeled data for downstream applications. We argue that the deep encoder should maximize its nonlinear expressivity on the data for downstream predictors to take full advantage of its representation power. To this end, NAC maximizes the mutual information between activation patterns of the encoder and the data over a noisy communication channel. We show that learning a noise-robust activation code increases the number of distinct linear regions of ReLU encoders, and hence the maximum nonlinear expressivity. More interestingly, NAC learns both continuous and discrete representations of data, which we respectively evaluate on two downstream tasks: (i) linear classification on CIFAR-10 and ImageNet-1K and (ii) nearest neighbor retrieval on CIFAR-10 and FLICKR-25K. Empirical results show that NAC attains better or comparable performance on both tasks compared with recent baselines including SimCLR and DistillHash. In addition, NAC pretraining provides significant benefits to the training of deep generative models. Our code is available at https://github.com/yookoon/nac.
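To give intuition for the link between activation patterns and linear regions mentioned in the abstract: a ReLU network is piecewise linear, and each distinct binary pattern of active units corresponds to a distinct linear region of the input space. The toy sketch below (not the NAC training objective itself, just an illustration with a hypothetical random one-layer encoder) counts the distinct activation patterns that a sample of inputs falls into, which lower-bounds the number of linear regions those inputs touch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy encoder: a single ReLU layer with random weights.
# (Illustrative only; NAC trains the encoder with a noisy-channel
# mutual-information objective, which is not reproduced here.)
d_in, d_hidden, n_points = 2, 16, 5000
W = rng.normal(size=(d_hidden, d_in))
b = rng.normal(size=d_hidden)

def activation_pattern(x):
    """Binary code: which ReLU units are active at input x."""
    pre = W @ x + b
    return tuple((pre > 0).astype(int))

# Sample inputs and collect the distinct activation patterns.
# Each distinct pattern corresponds to a distinct linear region of the
# piecewise-linear encoder, so the count below lower-bounds how many
# linear regions the sampled inputs visit.
xs = rng.uniform(-1, 1, size=(n_points, d_in))
patterns = {activation_pattern(x) for x in xs}
print(len(patterns))
```

In this picture, NAC's claim is that making the activation code robust to channel noise pushes the encoder toward more distinct, well-separated patterns, i.e. more linear regions and greater nonlinear expressivity.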
Author Information
Yookoon Park (Columbia University)
Sangho Lee (Seoul National University)
Gunhee Kim (Seoul National University)
David Blei (Columbia University)
David Blei is a Professor of Statistics and Computer Science at Columbia University, and a member of the Columbia Data Science Institute. His research is in statistical machine learning, involving probabilistic topic models, Bayesian nonparametric methods, and approximate posterior inference algorithms for massive data. He works on a variety of applications, including text, images, music, social networks, user behavior, and scientific data. David has received several awards for his research, including a Sloan Fellowship (2010), Office of Naval Research Young Investigator Award (2011), Presidential Early Career Award for Scientists and Engineers (2011), Blavatnik Faculty Award (2013), and ACM-Infosys Foundation Award (2013). He is a fellow of the ACM.
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Oral: Unsupervised Representation Learning via Neural Activation Coding (Fri. Jul 23rd, 01:00 -- 01:20 AM)
More from the Same Authors
- 2022: Optimization-based Causal Estimation from Heterogenous Environments (Mingzhang Yin · Yixin Wang · David Blei)
- 2023: Causal-structure Driven Augmentations for Text OOD Generalization (Amir Feder · Yoav Wald · Claudia Shi · Suchi Saria · David Blei)
- 2023: Practical and Asymptotically Exact Conditional Sampling in Diffusion Models (Brian Trippe · Luhuan Wu · Christian Naesseth · David Blei · John Cunningham)
- 2022: Reconstructing the Universe with Variational self-Boosted Sampling (Chirag Modi · Yin Li · David Blei)
- 2022 Poster: Variational Inference for Infinitely Deep Neural Networks (Achille Nazaret · David Blei)
- 2022 Spotlight: Variational Inference for Infinitely Deep Neural Networks (Achille Nazaret · David Blei)
- 2021 Poster: A Proxy Variable View of Shared Confounding (Yixin Wang · David Blei)
- 2021 Spotlight: A Proxy Variable View of Shared Confounding (Yixin Wang · David Blei)
- 2021 Poster: Unsupervised Skill Discovery with Bottleneck Option Learning (Jaekyeom Kim · Seohong Park · Gunhee Kim)
- 2021 Spotlight: Unsupervised Skill Discovery with Bottleneck Option Learning (Jaekyeom Kim · Seohong Park · Gunhee Kim)
- 2019 Poster: Variational Laplace Autoencoders (Yookoon Park · Chris Kim · Gunhee Kim)
- 2019 Oral: Variational Laplace Autoencoders (Yookoon Park · Chris Kim · Gunhee Kim)
- 2019 Poster: Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty (Youngjin Kim · Daniel Nam · Hyunwoo Kim · Ji-Hoon Kim · Gunhee Kim)
- 2019 Oral: Curiosity-Bottleneck: Exploration By Distilling Task-Specific Novelty (Youngjin Kim · Daniel Nam · Hyunwoo Kim · Ji-Hoon Kim · Gunhee Kim)
- 2018 Poster: Video Prediction with Appearance and Motion Conditions (Yunseok Jang · Gunhee Kim · Yale Song)
- 2018 Poster: Noisin: Unbiased Regularization for Recurrent Neural Networks (Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei)
- 2018 Oral: Noisin: Unbiased Regularization for Recurrent Neural Networks (Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei)
- 2018 Oral: Video Prediction with Appearance and Motion Conditions (Yunseok Jang · Gunhee Kim · Yale Song)
- 2018 Poster: Augment and Reduce: Stochastic Inference for Large Categorical Distributions (Francisco Ruiz · Michalis Titsias · Adji Bousso Dieng · David Blei)
- 2018 Poster: Black Box FDR (Wesley Tansey · Yixin Wang · David Blei · Raul Rabadan)
- 2018 Oral: Augment and Reduce: Stochastic Inference for Large Categorical Distributions (Francisco Ruiz · Michalis Titsias · Adji Bousso Dieng · David Blei)
- 2018 Oral: Black Box FDR (Wesley Tansey · Yixin Wang · David Blei · Raul Rabadan)
- 2017 Workshop: Implicit Generative Models (Rajesh Ranganath · Ian Goodfellow · Dustin Tran · David Blei · Balaji Lakshminarayanan · Shakir Mohamed)
- 2017 Poster: Robust Probabilistic Modeling with Bayesian Data Reweighting (Yixin Wang · Alp Kucukelbir · David Blei)
- 2017 Poster: SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization (Juyong Kim · Yookoon Park · Gunhee Kim · Sung Ju Hwang)
- 2017 Poster: Evaluating Bayesian Models with Posterior Dispersion Indices (Alp Kucukelbir · Yixin Wang · David Blei)
- 2017 Poster: Zero-Inflated Exponential Family Embeddings (Liping Liu · David Blei)
- 2017 Talk: Zero-Inflated Exponential Family Embeddings (Liping Liu · David Blei)
- 2017 Talk: SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization (Juyong Kim · Yookoon Park · Gunhee Kim · Sung Ju Hwang)
- 2017 Talk: Evaluating Bayesian Models with Posterior Dispersion Indices (Alp Kucukelbir · Yixin Wang · David Blei)
- 2017 Talk: Robust Probabilistic Modeling with Bayesian Data Reweighting (Yixin Wang · Alp Kucukelbir · David Blei)