Bayesian methods for adaptive decision-making, such as Bayesian optimisation, active learning, and active search, have seen great success in relevant applications. However, real-world data collection tasks are often broader and more complex, as we may need to achieve a combination of the above goals and/or application-specific goals. In such scenarios, specialised methods have limited applicability. In this work, we design a new myopic strategy for a wide class of adaptive design of experiments (DOE) problems, where we wish to collect data in order to fulfil a given goal. Our approach, Myopic Posterior Sampling (MPS), which is inspired by the classical posterior sampling algorithm for multi-armed bandits, enables us to address a broad suite of DOE tasks in which a practitioner may incorporate domain expertise about the system and specify her desired goal via a reward function. Empirically, this general-purpose strategy is competitive with more specialised methods on a wide array of synthetic and real-world DOE tasks. More importantly, it enables addressing complex DOE goals for which no existing method seems applicable. On the theoretical side, we leverage ideas from adaptive submodularity and reinforcement learning to derive conditions under which MPS achieves sublinear regret against natural benchmark policies.
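To illustrate the general recipe the abstract describes (sample a model from the posterior, act greedily with respect to the goal's one-step reward, observe, and update), here is a minimal sketch on a toy active-search goal: find as many positive items as possible from a finite pool. The two-group Beta-Bernoulli model, the item pool, and the count-of-positives reward are illustrative assumptions for this sketch, not the paper's experimental setup or exact algorithm.

```python
import random

random.seed(0)

# Hidden ground truth: items in group 1 are positive with probability 0.8,
# items in group 0 with probability 0.2. The agent only sees each item's group.
groups = [i % 2 for i in range(40)]
truth = [1 if random.random() < (0.8 if g == 1 else 0.2) else 0 for g in groups]

# Beta(1, 1) prior on the positive rate of each group.
alpha, beta = [1.0, 1.0], [1.0, 1.0]
queried, found = set(), 0

for _ in range(20):
    # 1. Sample a model (a positive rate per group) from the current posterior.
    theta = [random.betavariate(alpha[g], beta[g]) for g in (0, 1)]
    # 2. Act myopically: query the unlabelled item with the highest one-step
    #    expected reward (probability of being positive) under the sampled model.
    candidates = [i for i in range(len(groups)) if i not in queried]
    pick = max(candidates, key=lambda i: theta[groups[i]])
    # 3. Observe the label and update the posterior.
    queried.add(pick)
    y = truth[pick]
    found += y
    alpha[groups[pick]] += y
    beta[groups[pick]] += 1 - y

print("positives found in 20 queries:", found)
```

Sampling from the posterior, rather than acting on the posterior mean, is what drives exploration here: early on, the sampled rates vary enough that both groups get queried, and the posterior then concentrates on the more rewarding group.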
Author Information
Kirthevasan Kandasamy (Carnegie Mellon University)
Willie Neiswanger (CMU)
Reed Zhang (Carnegie Mellon University)
Akshay Krishnamurthy (Microsoft Research)
Jeff Schneider (Uber/CMU)
Barnabás Póczos (CMU)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Myopic Posterior Sampling for Adaptive Goal Oriented Design of Experiments »
  Wed Jun 12th 11:25 -- 11:30 PM, Room 201
More from the Same Authors
- 2020 Workshop: Real World Experiment Design and Active Learning »
  Ilija Bogunovic · Willie Neiswanger · Yisong Yue
- 2020 Poster: Doubly robust off-policy evaluation with shrinkage »
  Yi Su · Maria Dimakopoulou · Akshay Krishnamurthy · Miroslav Dudik
- 2020 Poster: Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning »
  Dipendra Misra · Mikael Henaff · Akshay Krishnamurthy · John Langford
- 2020 Poster: Reward-Free Exploration for Reinforcement Learning »
  Chi Jin · Akshay Krishnamurthy · Max Simchowitz · Tiancheng Yu
- 2020 Poster: VideoOneNet: Bidirectional Convolutional Recurrent OneNet with Trainable Data Steps for Video Processing »
  Zoltán Á. Milacski · Barnabás Póczos · Andras Lorincz
- 2020 Poster: Adaptive Estimator Selection for Off-Policy Evaluation »
  Yi Su · Pavithra Srinath · Akshay Krishnamurthy
- 2020 Poster: Private Reinforcement Learning with PAC and Regret Guarantees »
  Giuseppe Vietri · Borja de Balle Pigem · Akshay Krishnamurthy · Steven Wu
- 2019 Poster: Provably efficient RL with Rich Observations via Latent State Decoding »
  Simon Du · Akshay Krishnamurthy · Nan Jiang · Alekh Agarwal · Miroslav Dudik · John Langford
- 2019 Oral: Provably efficient RL with Rich Observations via Latent State Decoding »
  Simon Du · Akshay Krishnamurthy · Nan Jiang · Alekh Agarwal · Miroslav Dudik · John Langford
- 2018 Poster: Multi-Fidelity Black-Box Optimization with Hierarchical Partitions »
  Rajat Sen · Kirthevasan Kandasamy · Sanjay Shakkottai
- 2018 Poster: Semiparametric Contextual Bandits »
  Akshay Krishnamurthy · Steven Wu · Vasilis Syrgkanis
- 2018 Poster: Transformation Autoregressive Networks »
  Junier Oliva · Kumar Avinava Dubey · Manzil Zaheer · Barnabás Póczos · Ruslan Salakhutdinov · Eric Xing · Jeff Schneider
- 2018 Oral: Semiparametric Contextual Bandits »
  Akshay Krishnamurthy · Steven Wu · Vasilis Syrgkanis
- 2018 Oral: Transformation Autoregressive Networks »
  Junier Oliva · Kumar Avinava Dubey · Manzil Zaheer · Barnabás Póczos · Ruslan Salakhutdinov · Eric Xing · Jeff Schneider
- 2018 Oral: Multi-Fidelity Black-Box Optimization with Hierarchical Partitions »
  Rajat Sen · Kirthevasan Kandasamy · Sanjay Shakkottai
- 2018 Poster: Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima »
  Simon Du · Jason Lee · Yuandong Tian · Aarti Singh · Barnabás Póczos
- 2018 Oral: Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima »
  Simon Du · Jason Lee · Yuandong Tian · Aarti Singh · Barnabás Póczos
- 2017 Poster: Multi-fidelity Bayesian Optimisation with Continuous Approximations »
  Kirthevasan Kandasamy · Gautam Dasarathy · Barnabás Póczos · Jeff Schneider
- 2017 Talk: Multi-fidelity Bayesian Optimisation with Continuous Approximations »
  Kirthevasan Kandasamy · Gautam Dasarathy · Barnabás Póczos · Jeff Schneider
- 2017 Poster: Contextual Decision Processes with low Bellman rank are PAC-Learnable »
  Nan Jiang · Akshay Krishnamurthy · Alekh Agarwal · John Langford · Robert Schapire
- 2017 Poster: The Statistical Recurrent Unit »
  Junier Oliva · Barnabás Póczos · Jeff Schneider
- 2017 Poster: Nonparanormal Information Estimation »
  Shashank Singh · Barnabás Póczos
- 2017 Talk: Nonparanormal Information Estimation »
  Shashank Singh · Barnabás Póczos
- 2017 Talk: Contextual Decision Processes with low Bellman rank are PAC-Learnable »
  Nan Jiang · Akshay Krishnamurthy · Alekh Agarwal · John Langford · Robert Schapire
- 2017 Talk: The Statistical Recurrent Unit »
  Junier Oliva · Barnabás Póczos · Jeff Schneider
- 2017 Poster: Post-Inference Prior Swapping »
  Willie Neiswanger · Eric Xing
- 2017 Poster: Equivariance Through Parameter-Sharing »
  Siamak Ravanbakhsh · Jeff Schneider · Barnabás Póczos
- 2017 Poster: Active Learning for Cost-Sensitive Classification »
  Akshay Krishnamurthy · Alekh Agarwal · Tzu-Kuo Huang · Hal Daumé III · John Langford
- 2017 Talk: Active Learning for Cost-Sensitive Classification »
  Akshay Krishnamurthy · Alekh Agarwal · Tzu-Kuo Huang · Hal Daumé III · John Langford
- 2017 Talk: Equivariance Through Parameter-Sharing »
  Siamak Ravanbakhsh · Jeff Schneider · Barnabás Póczos
- 2017 Talk: Post-Inference Prior Swapping »
  Willie Neiswanger · Eric Xing