Deep neural networks are powerful black-box predictors for modeling conditional distributions of the form p(target|features). While they can be very successful on supervised learning problems where the training and test distributions are the same, they can make overconfident wrong predictions when the test distribution is different. Hybrid models that include both a discriminative conditional model p(target|features) and a generative model p(features) can be more robust under dataset shift, as they can detect covariate shift using the generative model. Current state-of-the-art hybrid models require approximate inference, which can be computationally expensive. We propose a hybrid model that defines a generalized linear model on top of deep invertible features (e.g., normalizing flows). An attractive property of our model is that both the log density p(features) and the predictive distribution p(target|features) can be computed exactly in a single feed-forward pass. We show that our hybrid model achieves predictive accuracy similar to purely discriminative models on classification and regression tasks, while providing better uncertainty quantification and the ability to detect out-of-distribution inputs. We also demonstrate that the generative component of the hybrid model can leverage unlabeled data for semi-supervised learning and can generate samples that help visualize and interpret the model. Because the exact joint density p(target, features) is available, many quantities of interest can be computed readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning, including but not limited to active learning and domain adaptation.
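To make the construction concrete, below is a minimal, hypothetical sketch in Python/NumPy (not the paper's released code): a single triangular affine map stands in for the deep normalizing flow, and a softmax regression plays the role of the generalized linear model. One forward pass returns both the exact log density p(features) and the predictive distribution p(target|features), and their sum gives the exact joint log density. The one-layer toy "flow" and all variable names are illustrative assumptions.

# Minimal sketch of a hybrid model: invertible features + GLM head.
# A single forward pass yields log p(x) and p(y|x) exactly.
import numpy as np

rng = np.random.default_rng(0)
D, K = 2, 3                                   # feature dimension, number of classes

# Invertible map z = f(x) = A x + b with A triangular and a positive diagonal,
# so log|det df/dx| = sum(d) is available in closed form.
L = np.tril(rng.normal(size=(D, D)), k=-1)    # strictly lower-triangular part
d = rng.normal(size=D)                        # log of the diagonal entries
A = L + np.diag(np.exp(d))
b = rng.normal(size=D)

# Generalized linear model on the invertible features: p(y|x) = softmax(W z + c).
W = rng.normal(size=(K, D))
c = np.zeros(K)

def hybrid_forward(x):
    """Return (log p(x), p(y|x)) for a single input x in one feed-forward pass."""
    z = A @ x + b                                                  # invertible features
    # Change of variables with a standard-normal base density on z.
    log_px = -0.5 * (z @ z + D * np.log(2.0 * np.pi)) + d.sum()
    logits = W @ z + c                                             # GLM head
    p_y = np.exp(logits - logits.max())
    p_y /= p_y.sum()                                               # softmax over classes
    return log_px, p_y

x = rng.normal(size=D)
log_px, p_y = hybrid_forward(x)
log_joint = log_px + np.log(p_y)    # exact log p(target, features) for each class
print(log_px, p_y, log_joint)

In this sketch, a low log p(x) flags a possible out-of-distribution input via the generative component, while p(y|x) is used for prediction as usual; the paper replaces the toy affine map with a deep invertible network (e.g., a normalizing flow) while keeping both quantities exact.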
Author Information
Eric Nalisnick (University of Cambridge & DeepMind)
Akihiro Matsukawa (DeepMind)
Yee-Whye Teh (University of Oxford & DeepMind)
Dilan Gorur (DeepMind)
Balaji Lakshminarayanan (Google DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Hybrid Models with Deep and Invertible Features »
  Thu. Jun 13th 01:30 -- 04:00 AM Room Pacific Ballroom #268
More from the Same Authors
- 2021 : Continual Learning via Function-Space Variational Inference: A Unifying View »
  Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal
- 2021 : A simple fix to Mahalanobis distance for improving near-OOD detection »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Precise characterization of the prior predictive distribution of deep ReLU networks »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Repulsive Deep Ensembles are Bayesian »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Calibrated Out-of-Distribution Detection with Conformal P-values »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Are Bayesian neural networks intrinsically good at out-of-distribution detection? »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Provably Robust Detection of Out-of-distribution Data (almost) for free »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Rethinking Assumptions in Deep Anomaly Detection »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Multiple Moment Matching Inference: A Flexible Approximate Inference Algorithm »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : PAC Prediction Sets Under Covariate Shift »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Do We Really Need to Learn Representations from In-domain Data for Outlier Detection? »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : DATE: Detecting Anomalies in Text via Self-Supervision of Transformers »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Implicit Ensemble Training for Efficient and Robust Multiagent Reinforcement Learning »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Failures of Uncertainty Estimation on Out-Of-Distribution Samples: Experimental Results from Medical Applications Lead to Theoretical Insights »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : On Out-of-distribution Detection with Energy-Based Models »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty »
  Andreas Kirsch · Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Transfer and Marginalize: Explaining Away Label Noise with Privileged Information »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Meta-Calibration: Meta-Learning of Model Calibration Using Differentiable Expected Calibration Error »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Inferring Black Hole Properties from Astronomical Multivariate Time Series with Bayesian Attentive Neural Processes »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Towards improving robustness of compressed CNNs »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : SAND-mask: An Enhanced Gradient Masking Strategy for Invariant Prediction in Domain Generalization »
  Soroosh Shahtalebi · Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Efficient Gaussian Neural Processes for Regression »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Simple, Attack-Agnostic Defense Against Targeted Training Set Attacks Using Cosine Similarity »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Safety & Exploration: A Comparative Study of Uses of Uncertainty in Reinforcement Learning »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Rethinking Function-Space Variational Inference in Bayesian Neural Networks »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Understanding the Under-Coverage Bias in Uncertainty Estimation »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : BETH Dataset: Real Cybersecurity Data for Anomaly Detection Research »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Exact and Efficient Adversarial Robustness with Decomposable Neural Networks »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Consistency Regularization for Training Confidence-Calibrated Classifiers »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Quantization of Bayesian neural networks and its effect on quality of uncertainty »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Bayesian Neural Networks with Soft Evidence »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Anomaly Detection for Event Data with Temporal Point Processes »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : An Empirical Study of Invariant Risk Minimization on Deep Models »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : A Bayesian Approach to Invariant Deep Neural Networks »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Practical posterior Laplace approximation with optimization-driven second moment estimation »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Variational Generative Flows for Reconstruction Uncertainty Estimation »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Consistency Regularization Can Improve Robustness to Label Noise »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Neural Variational Gradient Descent »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Evaluating the Use of Reconstruction Error for Novelty Localization »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : The Hidden Uncertainty in a Neural Network’s Activations »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : On the Calibration of Deterministic Epistemic Uncertainty »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Objective Robustness in Deep Reinforcement Learning »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Epistemic Uncertainty in Learning Chaotic Dynamical Systems »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Towards Stochastic Neural Networks via Inductive Wasserstein Embeddings »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Distribution-free uncertainty quantification for classification under label shift »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : How does a Neural Network's Architecture Impact its Robustness to Noisy Labels? »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Top-label calibration »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Learning to Align the Support of Distributions »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Contrastive Predictive Coding for Anomaly Detection and Segmentation »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Multi-headed Neural Ensemble Search »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : A variational approximate posterior for the deep Wishart process »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel »
  Yao Qin · Jasper Snoek · Balaji Lakshminarayanan
- 2021 : On Stein Variational Neural Network Ensembles »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : RouBL: A computationally cheap way to go beyond mean-field variational inference »
  Sahar Karimi · Balaji Lakshminarayanan · Jasper Snoek
- 2021 : No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Out-of-Distribution Generalization with Deep Equilibrium Models »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Mixture Proportion Estimation and PU Learning: A Modern Approach »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : On The Dark Side Of Calibration For Modern Neural Networks »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Domain Adaptation with Factorizable Joint Shift »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Scaling Laws for the Out-of-Distribution Generalization of Image Classifiers »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Learning Invariant Weights in Neural Networks »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Relational Deep Reinforcement Learning and Latent Goals for Following Instructions in Temporal Logic »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : On the Effectiveness of Mode Exploration in Bayesian Model Averaging for Neural Networks »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Detecting OODs as datapoints with High Uncertainty »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Multi-task Transformation Learning for Robust Out-of-Distribution Detection »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Directly Training Joint Energy-Based Models for Conditional Synthesis and Calibrated Prediction of Multi-Attribute Data »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Deep Learning with Quantified Uncertainty for Free Electron Laser Scientific Facilities »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : On the reversed bias-variance tradeoff in deep ensembles »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Robust Generalization of Quadratic Neural Networks via Function Identification »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Deep Random Projection Outlyingness for Unsupervised Anomaly Detection »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Deep Deterministic Uncertainty for Semantic Segmentation »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Identifying Invariant and Sparse Predictors in High-dimensional Data »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : On Misclassification-Aware Smoothing for Robustness and Uncertainty Calibration »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : On Pitfalls in OoD Detection: Entropy Considered Harmful »
  Andreas Kirsch · Jasper Snoek · Balaji Lakshminarayanan
- 2021 : PnPOOD: Out-Of-Distribution Detection for Text Classification via Plug and Play Data Augmentation »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Augmented Invariant Regularization »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Improved Adversarial Robustness via Uncertainty Targeted Attacks »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Notes on the Behavior of MC Dropout »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Distribution-free Risk-controlling Prediction Sets »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Stochastic Bouncy Particle Sampler for Bayesian Neural Networks »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Novelty detection using ensembles with regularized disagreement »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : A Tale Of Two Long Tails »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Defending against Adversarial Patches with Robust Self-Attention »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Intrinsic uncertainties and where to find them »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Dataset to Dataspace: A Topological-Framework to Improve Analysis of Machine Learning Model Performance »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Analyzing And Improving Neural Networks By Generating Semantic Counterexamples Through Differentiable Rendering »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Thinkback: Task-Specific Out-of-Distribution Detection »
  Jasper Snoek · Balaji Lakshminarayanan
- 2021 : Relating Adversarially Robust Generalization to Flat Minima »
  Balaji Lakshminarayanan · Jasper Snoek
- 2021 : Deep Quantile Aggregation »
  Balaji Lakshminarayanan · Jasper Snoek
- 2022 : Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations »
  Cong Lu · Philip Ball · Tim G. J Rudner · Jack Parker-Holder · Michael A Osborne · Yee-Whye Teh
- 2022 : Plex: Towards Reliability using Pretrained Large Model Extensions »
  Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · E. Kelly Buchanan · Kevin Murphy · Mark Collier · Mike Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani
- 2022 : Plex: Towards Reliability using Pretrained Large Model Extensions »
  Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · Estefany Kelly Buchanan · Kevin Murphy · Mark Collier · Michael Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani
- 2023 : Synthetic Experience Replay »
  Cong Lu · Philip Ball · Yee-Whye Teh · Jack Parker-Holder
- 2023 : Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models »
  Yunhao Ge · Jie Ren · Jiaping Zhao · Kaifeng Chen · Andrew Gallagher · Laurent Itti · Balaji Lakshminarayanan
- 2023 : Morse Neural Networks for Uncertainty Quantification »
  Benoit Dherin · Huiyi Hu · Jie Ren · Michael Dusenberry · Balaji Lakshminarayanan
- 2023 Poster: Modality-Agnostic Variational Compression of Implicit Neural Representations »
  Jonathan Richard Schwarz · Jihoon Tack · Yee-Whye Teh · Jaeho Lee · Jinwoo Shin
- 2023 Poster: Learning Instance-Specific Augmentations by Capturing Local Invariances »
  Ning Miao · Tom Rainforth · Emile Mathieu · Yann Dubois · Yee-Whye Teh · Adam Foster · Hyunjik Kim
- 2023 Poster: Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions »
  Leo Klarner · Tim G. J. Rudner · Michael Reutlinger · Torsten Schindler · Garrett Morris · Charlotte Deane · Yee-Whye Teh
- 2023 Poster: A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models »
  James Allingham · Jie Ren · Michael Dusenberry · Xiuye Gu · Yin Cui · Dustin Tran · Jeremiah Liu · Balaji Lakshminarayanan
- 2022 Poster: Wide Neural Networks Forget Less Catastrophically »
  Seyed Iman Mirzadeh · Arslan Chaudhry · Dong Yin · Huiyi Hu · Razvan Pascanu · Dilan Gorur · Mehrdad Farajtabar
- 2022 Poster: Continual Learning via Sequential Function-Space Variational Inference »
  Tim G. J Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal
- 2022 Spotlight: Wide Neural Networks Forget Less Catastrophically »
  Seyed Iman Mirzadeh · Arslan Chaudhry · Dong Yin · Huiyi Hu · Razvan Pascanu · Dilan Gorur · Mehrdad Farajtabar
- 2022 Spotlight: Continual Learning via Sequential Function-Space Variational Inference »
  Tim G. J Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal
- 2021 : Continual Learning via Function-Space Variational Inference: A Unifying View »
  Yarin Gal · Yee-Whye Teh · Qixuan Feng · Freddie Bickford Smith · Tim G. J. Rudner
- 2021 Workshop: Uncertainty and Robustness in Deep Learning »
  Balaji Lakshminarayanan · Dan Hendrycks · Sharon Li · Jasper Snoek · Silvia Chiappa · Sebastian Nowozin · Thomas Dietterich
- 2021 : Welcome »
  Balaji Lakshminarayanan
- 2021 Poster: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes »
  Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh
- 2021 Spotlight: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes »
  Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh
- 2021 Test Of Time: Bayesian Learning via Stochastic Gradient Langevin Dynamics »
  Yee Teh · Max Welling
- 2021 Poster: LieTransformer: Equivariant Self-Attention for Lie Groups »
  Michael Hutchinson · Charline Le Lan · Sheheryar Zaidi · Emilien Dupont · Yee-Whye Teh · Hyunjik Kim
- 2021 Spotlight: LieTransformer: Equivariant Self-Attention for Lie Groups »
  Michael Hutchinson · Charline Le Lan · Sheheryar Zaidi · Emilien Dupont · Yee-Whye Teh · Hyunjik Kim
- 2020 : Invited talk 2: Detecting Distribution Shift with Deep Generative Models »
  Eric Nalisnick
- 2020 Workshop: Uncertainty and Robustness in Deep Learning Workshop (UDL) »
  Sharon Yixuan Li · Balaji Lakshminarayanan · Dan Hendrycks · Thomas Dietterich · Jasper Snoek
- 2020 Poster: MetaFun: Meta-Learning with Iterative Functional Updates »
  Jin Xu · Jean-Francois Ton · Hyunjik Kim · Adam Kosiorek · Yee-Whye Teh
- 2020 Poster: Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support »
  Yuan Zhou · Hongseok Yang · Yee-Whye Teh · Tom Rainforth
- 2020 Poster: Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors »
  Mike Dusenberry · Ghassen Jerfel · Yeming Wen · Yian Ma · Jasper Snoek · Katherine Heller · Balaji Lakshminarayanan · Dustin Tran
- 2020 Poster: Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise »
  Umut Simsekli · Lingjiong Zhu · Yee-Whye Teh · Mert Gurbuzbalaban
- 2020 Poster: Uncertainty Estimation Using a Single Deep Deterministic Neural Network »
  Joost van Amersfoort · Lewis Smith · Yee-Whye Teh · Yarin Gal
- 2019 Workshop: Uncertainty and Robustness in Deep Learning »
  Sharon Yixuan Li · Dan Hendrycks · Thomas Dietterich · Balaji Lakshminarayanan · Justin Gilmer
- 2019 Poster: Dropout as a Structured Shrinkage Prior »
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Poster: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
  Timothy Mann · Sven Gowal · Andras Gyorgy · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan
- 2019 Oral: Dropout as a Structured Shrinkage Prior »
  Eric Nalisnick · Jose Miguel Hernandez-Lobato · Padhraic Smyth
- 2019 Oral: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
  Timothy Mann · Sven Gowal · Andras Gyorgy · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan
- 2019 Poster: Disentangling Disentanglement in Variational Autoencoders »
  Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
- 2019 Oral: Disentangling Disentanglement in Variational Autoencoders »
  Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
- 2019 Poster: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks »
  Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh
- 2019 Oral: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks »
  Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh
- 2018 Poster: Progress & Compress: A scalable framework for continual learning »
  Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee Teh · Razvan Pascanu · Raia Hadsell
- 2018 Poster: Mix & Match - Agent Curricula for Reinforcement Learning »
  Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu
- 2018 Oral: Progress & Compress: A scalable framework for continual learning »
  Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee Teh · Razvan Pascanu · Raia Hadsell
- 2018 Oral: Mix & Match - Agent Curricula for Reinforcement Learning »
  Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu
- 2018 Poster: Conditional Neural Processes »
  Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee Teh · Danilo J. Rezende · S. M. Ali Eslami
- 2018 Poster: Tighter Variational Bounds are Not Necessarily Better »
  Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh
- 2018 Oral: Tighter Variational Bounds are Not Necessarily Better »
  Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh
- 2018 Oral: Conditional Neural Processes »
  Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee Teh · Danilo J. Rezende · S. M. Ali Eslami
- 2017 Workshop: Implicit Generative Models »
  Rajesh Ranganath · Ian Goodfellow · Dustin Tran · David Blei · Balaji Lakshminarayanan · Shakir Mohamed