
Function-Space Regularization in Neural Networks: A Probabilistic Perspective
Tim G. J. Rudner · Sanyam Kapoor · Shikai Qiu · Andrew Wilson

Tue Jul 25 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #638
Event URL: https://timrudner.com/function-space-empirical-bayes

Parameter-space regularization in neural network optimization is a fundamental tool for improving generalization. However, standard parameter-space regularization methods make it challenging to encode explicit preferences about desired predictive functions into neural network training. In this work, we approach regularization in neural networks from a probabilistic perspective and show that, by viewing parameter-space regularization as specifying an empirical prior distribution over the model parameters, we can derive a probabilistically well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training. This method---which we refer to as function-space empirical Bayes (FS-EB)---includes both parameter- and function-space regularization, is mathematically simple, easy to implement, and incurs only minimal computational overhead compared to standard regularization techniques. We evaluate the utility of this regularization technique empirically and demonstrate that the proposed method leads to near-perfect semantic shift detection, well-calibrated predictive uncertainty estimates, successful task adaptation from pre-trained models, and improved generalization under covariate shift.
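To make the combination of parameter- and function-space regularization described in the abstract concrete, here is a minimal toy sketch of such an objective. This is an illustrative assumption, not the authors' actual FS-EB implementation: the names (`fs_eb_objective`, `lambda_param`, `lambda_fn`, `context_xs`, `prior_fn`), the toy linear model, and the specific form of the penalties are all hypothetical, chosen only to show a training loss augmented with an L2 penalty on parameters plus a penalty keeping predictions close to a reference function on context inputs.

```python
# Hedged sketch of an objective combining parameter-space (L2) and
# function-space regularization, in the spirit of the abstract above.
# All names and the toy model are illustrative assumptions.

def predict(theta, x):
    """Toy linear 'network': f_theta(x) = theta[0] + theta[1] * x."""
    return theta[0] + theta[1] * x

def fs_eb_objective(theta, data, context_xs, prior_fn,
                    lambda_param=1e-2, lambda_fn=1e-1):
    # 1) Standard training loss (mean squared error on labeled data).
    task = sum((predict(theta, x) - y) ** 2 for x, y in data) / len(data)
    # 2) Parameter-space regularizer: L2 penalty on the weights,
    #    interpretable as an empirical prior over parameters.
    param_reg = lambda_param * sum(t ** 2 for t in theta)
    # 3) Function-space regularizer: keep the model's predictions close
    #    to a reference (prior) predictive function on context inputs,
    #    encoding an explicit preference about the predictive function.
    fn_reg = lambda_fn * sum(
        (predict(theta, x) - prior_fn(x)) ** 2 for x in context_xs
    ) / len(context_xs)
    return task + param_reg + fn_reg

# Example: parameters fitting the data perfectly still pay a small
# function-space penalty for deviating from a zero prior function.
loss = fs_eb_objective(
    theta=(1.0, 1.0),
    data=[(1.0, 2.0)],          # predict(theta, 1.0) == 2.0, so task loss is 0
    context_xs=[0.0],
    prior_fn=lambda x: 0.0,     # reference predictive function
)
```

In this toy setting, the first penalty alone would correspond to standard weight decay; the added function-space term is what lets one express preferences directly about the predictive function rather than the weights.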

Author Information

Tim G. J. Rudner (New York University)

I am a PhD Candidate in the Department of Computer Science at the University of Oxford, where I conduct research on probabilistic machine learning with Yarin Gal and Yee Whye Teh. My research interests span **Bayesian deep learning**, **variational inference**, and **reinforcement learning**. I am particularly interested in uncertainty quantification in deep learning, reinforcement learning as probabilistic inference, and probabilistic transfer learning. I am also a **Rhodes Scholar** and an **AI Fellow** at Georgetown University's Center for Security and Emerging Technology.

Sanyam Kapoor (New York University)
Shikai Qiu (New York University)
Andrew Wilson (New York University)
