Accelerating the discovery of novel and more effective therapeutics is an important pharmaceutical problem in which deep learning is playing an increasingly significant role. However, real-world drug discovery tasks are often characterized by a scarcity of labeled data and significant covariate shift---a setting that poses a challenge to standard deep learning methods. In this paper, we present Q-SAVI, a probabilistic model able to address these challenges by encoding explicit prior knowledge of the data-generating process into a prior distribution over functions, presenting researchers with a transparent and probabilistically principled way to encode data-driven modeling preferences. Building on a novel, gold-standard bioactivity dataset that facilitates a meaningful comparison of models in an extrapolative regime, we explore different approaches to induce data shift and construct a challenging evaluation setup. We then demonstrate that using Q-SAVI to integrate contextualized prior knowledge of drug-like chemical space into the modeling process affords substantial gains in predictive accuracy and calibration, outperforming a broad range of state-of-the-art self-supervised pre-training and domain adaptation techniques.
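The abstract's central mechanism, regularizing a model's predictive distribution towards a prior over functions evaluated at context inputs, can be illustrated with a toy sketch. This is an illustration only, not the paper's actual Q-SAVI objective; the function names, the diagonal-Gaussian predictive assumption, and the standard-normal prior are all hypothetical choices made for this example:

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Pointwise KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def function_space_penalty(model_mean, model_var, prior_mean, prior_var):
    """Average the pointwise KL over a batch of context inputs, giving a scalar
    regularizer that pulls the model's predictive distribution towards the
    prior over functions at those inputs."""
    return float(np.mean(gaussian_kl(model_mean, model_var, prior_mean, prior_var)))

# Hypothetical setup: the predictive mean/variance of a network evaluated at
# context points, which in the drug-discovery setting could be sampled from
# drug-like chemical space.
rng = np.random.default_rng(0)
model_mean = rng.normal(size=32)
model_var = np.full(32, 0.5)
prior_mean = np.zeros(32)   # assumed prior predictive: standard normal
prior_var = np.ones(32)
penalty = function_space_penalty(model_mean, model_var, prior_mean, prior_var)
```

The penalty is zero when the predictive distribution matches the prior at every context point and grows as the two diverge, which is the sense in which a function-space prior constrains the model on inputs where no labels are available.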
Author Information
Leo Klarner (University of Oxford)
Tim G. J. Rudner (New York University)
I am a PhD Candidate in the Department of Computer Science at the University of Oxford, where I conduct research on probabilistic machine learning with Yarin Gal and Yee Whye Teh. My research interests span **Bayesian deep learning**, **variational inference**, and **reinforcement learning**. I am particularly interested in uncertainty quantification in deep learning, reinforcement learning as probabilistic inference, and probabilistic transfer learning. I am also a **Rhodes Scholar** and an **AI Fellow** at Georgetown University's Center for Security and Emerging Technology.
Michael Reutlinger
Torsten Schindler (Google)
Garrett Morris (University of Oxford)
# Biography
Professor Morris began as an Oxford undergraduate chemist, completing his Part II and DPhil with W. Graham Richards in molecular modelling and graphical protein sequence analysis. In 1991 he moved to The Scripps Research Institute, California, to work on developing the protein–ligand docking software AutoDock. In 2000 he helped to launch the first biomedical volunteer computing project, FightAIDS@Home, which spawned other biomedical projects on IBM's World Community Grid. He moved back to the UK in 2008 to work at the Oxford spinout InhibOx (now Oxford Drug Design), doing 'real-world' drug discovery, where he co-supervised SABS CDT students, developed novel virtual screening methods, and spearheaded the use of cloud computing. He later worked at another Oxford spinout, Crysalin, developing novel protein engineering techniques for reliable protein crystallization. He is now an Associate Professor and works closely with Prof Deane in the Department of Statistics in the Oxford Protein Informatics Group (OPIG). Like Charlotte, he is also Programme Co-Director for the EPSRC & MRC SABS CDT (Systems Approaches to Biomedical Science Centre for Doctoral Training), which was renewed in 2019 as the EPSRC & MRC Sustainable Approaches to Biomedical Science: Responsible and Reproducible Research Centre for Doctoral Training, or EPSRC & MRC SABS R3 CDT. In September 2019, he was made Deputy Director of Graduate Studies in the Department of Statistics.
# Research Interests
I am interested in the application of statistical methods and software development to computer-aided drug discovery, chiefly high-throughput docking, ligand-based virtual screening, network pharmacology and systems chemical biology, cheminformatics, and bioinformatics. I am particularly interested in applications of machine learning, including active learning, deep learning, and generative AI.
Charlotte Deane (University of Oxford)
Charlotte is Professor of Structural Bioinformatics in the Department of Statistics at the University of Oxford and Chief Scientist of Biologics AI at Exscientia. She is also a co-director of the Systems Approaches to Biomedical Research Centre for Doctoral Training, which she founded in 2009. She served on SAGE, the UK Government's Scientific Advisory Group for Emergencies, during the COVID-19 pandemic, and acted as UK Research and Innovation's COVID-19 Response Director. She has held numerous senior roles at the University of Oxford and until recently was the Deputy Executive Chair of the UK's Engineering and Physical Sciences Research Council. At Oxford, Charlotte leads the Oxford Protein Informatics Group (OPIG), which works on diverse problems across immunoinformatics, protein structure, and small-molecule drug discovery, using statistics, AI, and computation to generate biological and medical insight. Her work focuses on the development of novel algorithms, tools, and databases that are openly available to the community. These tools are widely used web resources and are also part of several pharma drug discovery pipelines. Charlotte is on several advisory boards and has consulted extensively with industry. She has set up a consulting arm within her own research group as a way of promoting industrial interaction and use of the group's software tools.
Yee-Whye Teh (Oxford and DeepMind)
More from the Same Authors
-
2021 : Continual Learning via Function-Space Variational Inference: A Unifying View »
Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal -
2022 : GAUCHE: A Library for Gaussian Processes in Chemistry »
Ryan-Rhys Griffiths · Leo Klarner · Henry Moss · Aditya Ravuri · Sang Truong · Yuanqi Du · Arian Jamasb · Julius Schwartz · Austin Tripp · Bojana Ranković · Philippe Schwaller · Gregory Kell · Anthony Bourached · Alexander Chan · Jacob Moss · Chengzhi Guo · Alpha Lee · Jian Tang -
2022 : Bias in the Benchmark: Systematic experimental errors in bioactivity databases confound multi-task and meta-learning algorithms »
Leo Klarner · Michael Reutlinger · Torsten Schindler · Charlotte Deane · Garrett Morris -
2022 : Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations »
Cong Lu · Philip Ball · Tim G. J Rudner · Jack Parker-Holder · Michael A Osborne · Yee-Whye Teh -
2022 : Plex: Towards Reliability using Pretrained Large Model Extensions »
Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · E. Kelly Buchanan · Kevin Murphy · Mark Collier · Mike Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani -
2023 : Synthetic Experience Replay »
Cong Lu · Philip Ball · Yee-Whye Teh · Jack Parker-Holder -
2023 : Protein Design with Guided Discrete Diffusion »
Nate Gruver · Samuel Stanton · Nathan Frey · Tim G. J. Rudner · Isidro Hotzel · Julien Lafrance-Vanasse · Arvind Rajpal · Kyunghyun Cho · Andrew Wilson -
2023 Poster: Modality-Agnostic Variational Compression of Implicit Neural Representations »
Jonathan Richard Schwarz · Jihoon Tack · Yee-Whye Teh · Jaeho Lee · Jinwoo Shin -
2023 Poster: Learning Instance-Specific Augmentations by Capturing Local Invariances »
Ning Miao · Tom Rainforth · Emile Mathieu · Yann Dubois · Yee-Whye Teh · Adam Foster · Hyunjik Kim -
2023 Poster: Function-Space Regularization in Neural Networks: A Probabilistic Perspective »
Tim G. J. Rudner · Sanyam Kapoor · Shikai Qiu · Andrew Wilson -
2022 Poster: Continual Learning via Sequential Function-Space Variational Inference »
Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal -
2022 Spotlight: Continual Learning via Sequential Function-Space Variational Inference »
Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal -
2021 Poster: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes »
Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh -
2021 Spotlight: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes »
Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh -
2021 Test Of Time: Bayesian Learning via Stochastic Gradient Langevin Dynamics »
Yee Teh · Max Welling -
2021 Poster: On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes »
Tim G. J. Rudner · Oscar Key · Yarin Gal · Tom Rainforth -
2021 Spotlight: On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes »
Tim G. J. Rudner · Oscar Key · Yarin Gal · Tom Rainforth -
2021 Poster: LieTransformer: Equivariant Self-Attention for Lie Groups »
Michael Hutchinson · Charline Le Lan · Sheheryar Zaidi · Emilien Dupont · Yee-Whye Teh · Hyunjik Kim -
2021 Spotlight: LieTransformer: Equivariant Self-Attention for Lie Groups »
Michael Hutchinson · Charline Le Lan · Sheheryar Zaidi · Emilien Dupont · Yee-Whye Teh · Hyunjik Kim -
2020 Poster: Inter-domain Deep Gaussian Processes »
Tim G. J. Rudner · Dino Sejdinovic · Yarin Gal -
2020 Poster: MetaFun: Meta-Learning with Iterative Functional Updates »
Jin Xu · Jean-Francois Ton · Hyunjik Kim · Adam Kosiorek · Yee-Whye Teh -
2020 Poster: Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support »
Yuan Zhou · Hongseok Yang · Yee-Whye Teh · Tom Rainforth -
2020 Poster: Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise »
Umut Simsekli · Lingjiong Zhu · Yee-Whye Teh · Mert Gurbuzbalaban -
2020 Poster: Uncertainty Estimation Using a Single Deep Deterministic Neural Network »
Joost van Amersfoort · Lewis Smith · Yee-Whye Teh · Yarin Gal -
2019 Oral: Hybrid Models with Deep and Invertible Features »
Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan -
2019 Poster: Disentangling Disentanglement in Variational Autoencoders »
Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh -
2019 Poster: Hybrid Models with Deep and Invertible Features »
Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan -
2019 Oral: Disentangling Disentanglement in Variational Autoencoders »
Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh -
2019 Poster: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks »
Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh -
2019 Oral: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks »
Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh -
2018 Poster: Progress & Compress: A scalable framework for continual learning »
Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee Teh · Razvan Pascanu · Raia Hadsell -
2018 Poster: Mix & Match - Agent Curricula for Reinforcement Learning »
Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu -
2018 Oral: Progress & Compress: A scalable framework for continual learning »
Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee Teh · Razvan Pascanu · Raia Hadsell -
2018 Oral: Mix & Match - Agent Curricula for Reinforcement Learning »
Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu -
2018 Poster: Conditional Neural Processes »
Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee Teh · Danilo J. Rezende · S. M. Ali Eslami -
2018 Poster: Tighter Variational Bounds are Not Necessarily Better »
Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh -
2018 Oral: Tighter Variational Bounds are Not Necessarily Better »
Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh -
2018 Oral: Conditional Neural Processes »
Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee Teh · Danilo J. Rezende · S. M. Ali Eslami