We introduce Deep Adaptive Design (DAD), a method for amortizing the cost of adaptive Bayesian experimental design that allows experiments to be run in real-time. Traditional sequential Bayesian optimal experimental design approaches require substantial computation at each stage of the experiment. This makes them unsuitable for most real-world applications, where decisions must typically be made quickly. DAD addresses this restriction by learning an amortized design network upfront and then using this to rapidly run (multiple) adaptive experiments at deployment time. This network represents a design policy which takes as input the data from previous steps, and outputs the next design using a single forward pass; these design decisions can be made in milliseconds during the live experiment. To train the network, we introduce contrastive information bounds that are suitable objectives for the sequential setting, and propose a customized network architecture that exploits key symmetries. We demonstrate that DAD successfully amortizes the process of experimental design, outperforming alternative strategies on a number of problems.
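To make the policy idea in the abstract concrete, here is a minimal sketch, assuming a PyTorch-style implementation: each (design, outcome) pair in the history is embedded and sum-pooled, so the network respects the permutation symmetry of the history, and the next design is produced in a single forward pass. The class name DesignPolicyNetwork, the layer sizes, and the sum-pooling choice are illustrative assumptions rather than the paper's exact architecture, and the contrastive training objective is not shown.

```python
import torch
import torch.nn as nn


class DesignPolicyNetwork(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): maps the
    experiment history {(design_t, outcome_t)} to the next design in one
    forward pass, using permutation-invariant sum-pooling over the history."""

    def __init__(self, design_dim: int = 1, outcome_dim: int = 1, hidden_dim: int = 64):
        super().__init__()
        # Embed each (design, outcome) pair independently.
        self.encoder = nn.Sequential(
            nn.Linear(design_dim + outcome_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Map the pooled history representation to the next design.
        self.emitter = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, design_dim),
        )
        # Learned representation used before any data has been observed.
        self.empty_rep = nn.Parameter(torch.zeros(hidden_dim))

    def forward(self, designs: torch.Tensor, outcomes: torch.Tensor) -> torch.Tensor:
        # designs: [batch, t, design_dim], outcomes: [batch, t, outcome_dim]
        if designs.shape[1] == 0:
            pooled = self.empty_rep.expand(designs.shape[0], -1)
        else:
            pairs = torch.cat([designs, outcomes], dim=-1)
            pooled = self.encoder(pairs).sum(dim=1)  # permutation-invariant pooling
        return self.emitter(pooled)


# At deployment time, each design decision is a single forward pass:
policy = DesignPolicyNetwork()
first_design = policy(torch.empty(1, 0, 1), torch.empty(1, 0, 1))  # empty history
```

Because pooling collapses the history to a fixed-size representation, the cost of each design decision stays roughly constant as the experiment grows, which is what makes millisecond-scale decisions at deployment time plausible.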
Author Information
Adam Foster (University of Oxford)
Desi Ivanova (University of Oxford)
Ilyas Malik (Amazon)
Tom Rainforth (University of Oxford)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design
  Thu. Jul 22nd, 04:00 -- 06:00 PM, Room: Virtual
More from the Same Authors
- 2021: Active Learning under Pool Set Distribution Shift and Noisy Data
  Andreas Kirsch · Tom Rainforth · Yarin Gal
- 2022: [Poster] Automated Adaptive Design in Real Time
  Desi Ivanova
- 2023 Poster: Differentiable Multi-Target Causal Bayesian Experimental Design
  Panagiotis Tigas · Yashas Annadani · Desi Ivanova · Andrew Jesson · Yarin Gal · Adam Foster · Stefan Bauer
- 2023 Poster: Learning Instance-Specific Augmentations by Capturing Local Invariances
  Ning Miao · Tom Rainforth · Emile Mathieu · Yann Dubois · Yee-Whye Teh · Adam Foster · Hyunjik Kim
- 2023 Poster: CO-BED: Information-Theoretic Contextual Optimization via Bayesian Experimental Design
  Desi Ivanova · Joel Jennings · Tom Rainforth · Cheng Zhang · Adam Foster
- 2022: Poster Session 2
  Asra Aslam · Sowmya Vijayakumar · Heta Gandhi · Mary Adewunmi · You Cheng · Tong Yang · Kristina Ulicna · · Weiwei Zong · Narmada Naik · Akshata Tiwari · Ambreen Hamadani · Mayuree Binjolkar · Charupriya Sharma · Chhavi Yadav · Yu Yang · Winnie Xu · Qingqing Zhao · Julissa Giuliana Villanueva Llerena · Lilian Mkonyi · Berthine Nyunga Mpinda · Rehema Mwawado · Tooba Imtiaz · Desi Ivanova · Emma Johanna Mikaela Petersson Svensson · Angela Bitto-Nemling · Elisabeth Rumetshofer · Ana Sanchez Fernandez · Garima Giri · Sigrid Passano Hellan · Catherine Ordun · Vasiliki Tassopoulou · Gina Wong
- 2022: Poster Session 1
  Asra Aslam · Sowmya Vijayakumar · Heta Gandhi · Mary Adewunmi · You Cheng · Tong Yang · Kristina Ulicna · · Weiwei Zong · Narmada Naik · Akshata Tiwari · Ambreen Hamadani · Mayuree Binjolkar · Charupriya Sharma · Chhavi Yadav · Yu Yang · Winnie Xu · Qingqing Zhao · Julissa Giuliana Villanueva Llerena · Lilian Mkonyi · Berthine Nyunga Mpinda · Rehema Mwawado · Tooba Imtiaz · Desi Ivanova · Emma Johanna Mikaela Petersson Svensson · Angela Bitto-Nemling · Elisabeth Rumetshofer · Ana Sanchez Fernandez · Garima Giri · Sigrid Passano Hellan · Catherine Ordun · Vasiliki Tassopoulou · Gina Wong
- 2021: Active Learning under Pool Set Distribution Shift and Noisy Data
  Yarin Gal · Tom Rainforth · Andreas Kirsch
- 2021 Poster: Active Testing: Sample-Efficient Model Evaluation
  Jannik Kossen · Sebastian Farquhar · Yarin Gal · Tom Rainforth
- 2021 Poster: On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes
  Tim G. J. Rudner · Oscar Key · Yarin Gal · Tom Rainforth
- 2021 Spotlight: Active Testing: Sample-Efficient Model Evaluation
  Jannik Kossen · Sebastian Farquhar · Yarin Gal · Tom Rainforth
- 2021 Spotlight: On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes
  Tim G. J. Rudner · Oscar Key · Yarin Gal · Tom Rainforth
- 2021 Poster: Probabilistic Programs with Stochastic Conditioning
  David Tolpin · Yuan Zhou · Tom Rainforth · Hongseok Yang
- 2021 Spotlight: Probabilistic Programs with Stochastic Conditioning
  David Tolpin · Yuan Zhou · Tom Rainforth · Hongseok Yang
- 2020: "Designing Bayesian-Optimal Experiments with Stochastic Gradients"
  Tom Rainforth
- 2020 Poster: Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support
  Yuan Zhou · Hongseok Yang · Yee-Whye Teh · Tom Rainforth
- 2019 Poster: Disentangling Disentanglement in Variational Autoencoders
  Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
- 2019 Oral: Disentangling Disentanglement in Variational Autoencoders
  Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
- 2019 Poster: Amortized Monte Carlo Integration
  Adam Golinski · Frank Wood · Tom Rainforth
- 2019 Oral: Amortized Monte Carlo Integration
  Adam Golinski · Frank Wood · Tom Rainforth
- 2018 Poster: On Nesting Monte Carlo Estimators
  Tom Rainforth · Rob Cornish · Hongseok Yang · Andrew Warrington · Frank Wood
- 2018 Oral: On Nesting Monte Carlo Estimators
  Tom Rainforth · Rob Cornish · Hongseok Yang · Andrew Warrington · Frank Wood
- 2018 Poster: Tighter Variational Bounds are Not Necessarily Better
  Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh
- 2018 Oral: Tighter Variational Bounds are Not Necessarily Better
  Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh