Poster
Active Learning for Decision-Making from Imbalanced Observational Data
Iiris Sundin · Peter Schulam · Eero Siivola · Aki Vehtari · Suchi Saria · Samuel Kaski
Machine learning can support personalized decision-making by learning models that predict individual treatment effects (ITE). This work studies the reliability of prediction-based decision-making in the task of deciding which action $a$ to take for a target unit after observing its covariates $\tilde{x}$ and predicted outcomes $\hat{p}(\tilde{y} \mid \tilde{x}, a)$. An example is personalized medicine and the decision of which treatment to give to a patient. A common problem when learning these models from observational data is imbalance, that is, a difference between the treated and control covariate distributions, which is known to increase the upper bound of the expected ITE estimation error. We propose to assess decision-making reliability by estimating the ITE model's Type S error rate, the probability that the model infers the wrong sign of the treatment effect. Furthermore, we use the estimated reliability as a criterion for active learning, so that new (possibly expensive) observations are collected instead of making a forced choice based on unreliable predictions. We demonstrate the effectiveness of this decision-making-aware active learning in two decision-making tasks: simulated data with binary outcomes, and a medical dataset with synthetic, continuous treatment outcomes.
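As a rough illustration of the quantities named in the abstract, the sketch below estimates the Type S error rate for one target unit from posterior draws of its ITE and uses it to choose between acting and querying more data. This is a minimal sketch, not the paper's actual model or acquisition function: the function names, the 0.05 reliability threshold, and the simple "decide or query" rule are illustrative assumptions.

```python
import numpy as np

def type_s_error(ite_samples):
    """Posterior probability that the sign of the predicted individual
    treatment effect (ITE) is wrong, estimated from posterior draws
    tau ~ p(y(a=1) - y(a=0) | x_tilde, data)."""
    tau = np.asarray(ite_samples)
    p_pos = (tau > 0).mean()
    # The model would act on the majority sign; the Type S error is the
    # remaining posterior mass on the opposite sign.
    return min(p_pos, 1.0 - p_pos)

def decide_or_query(ite_samples, reliability_threshold=0.05):
    """If the sign of the ITE is reliable enough, return the recommended
    action; otherwise signal that a new (possibly expensive) observation
    should be acquired instead of forcing a decision."""
    tau = np.asarray(ite_samples)
    if type_s_error(tau) <= reliability_threshold:
        return "treat" if tau.mean() > 0 else "control"
    return "query more data"

# Illustration only, with fake posterior draws:
rng = np.random.default_rng(0)
print(decide_or_query(rng.normal(0.3, 1.0, size=2000)))  # high Type S error -> "query more data"
print(decide_or_query(rng.normal(2.0, 0.5, size=2000)))  # low Type S error  -> "treat"
```

In the paper the reliability estimate also drives which new observations to acquire; the sketch above only flags when a forced choice would be unreliable.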
Author Information
Iiris Sundin (Aalto University)
Peter Schulam (Johns Hopkins University)
Eero Siivola (Aalto University)
Aki Vehtari (Aalto University)
Suchi Saria (Johns Hopkins University)
Samuel Kaski (Aalto University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Active Learning for Decision-Making from Imbalanced Observational Data
  Wed. Jun 12th 12:15 -- 12:20 AM, Room 101
More from the Same Authors
- 2023 : Augmenting Bayesian Optimization with Preference-based Expert Feedback
  Daolang Huang · Louis Filstroff · Petrus Mikkola · Runkai Zheng · Milica Todorovic · Samuel Kaski
- 2023 : Bayesian Active Meta-Learning under Prior Misspecification
  Sabina Sloman · Ayush Bharti · Samuel Kaski
- 2023 Poster: Optimally-weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference
  Ayush Bharti · Masha Naslidnyk · Oscar Key · Samuel Kaski · Francois-Xavier Briol
- 2022 Poster: Approximate Bayesian Computation with Domain Expert in the Loop
  Ayush Bharti · Louis Filstroff · Samuel Kaski
- 2022 Spotlight: Approximate Bayesian Computation with Domain Expert in the Loop
  Ayush Bharti · Louis Filstroff · Samuel Kaski
- 2022 Poster: Tackling covariate shift with node-based Bayesian neural networks
  Trung Trinh · Markus Heinonen · Luigi Acerbi · Samuel Kaski
- 2022 Oral: Tackling covariate shift with node-based Bayesian neural networks
  Trung Trinh · Markus Heinonen · Luigi Acerbi · Samuel Kaski
- 2021 Poster: Differentially Private Bayesian Inference for Generalized Linear Models
  Tejas Kulkarni · Joonas Jälkö · Antti Koskela · Samuel Kaski · Antti Honkela
- 2021 Spotlight: Differentially Private Bayesian Inference for Generalized Linear Models
  Tejas Kulkarni · Joonas Jälkö · Antti Koskela · Samuel Kaski · Antti Honkela
- 2020 Poster: Projective Preferential Bayesian Optimization
  Petrus Mikkola · Milica Todorović · Jari Järvi · Patrick Rinke · Samuel Kaski
- 2019 : Suchi Saria (Johns Hopkins) - Link between Causal Inference and Reinforcement Learning and Applications to Learning from Offline/Observational Data
  Suchi Saria
- 2019 : Keynote by Suchi Saria: Safety Challenges with Black-Box Predictors and Novel Learning Approaches for Failure Proofing
  Suchi Saria
- 2019 : Networking Lunch (provided) + Poster Session
  Abraham Stanway · Alex Robson · Aneesh Rangnekar · Ashesh Chattopadhyay · Ashley Pilipiszyn · Benjamin LeRoy · Bolong Cheng · Ce Zhang · Chaopeng Shen · Christian Schroeder · Christian Clough · Clement DUHART · Clement Fung · Cozmin Ududec · Dali Wang · David Dao · di wu · Dimitrios Giannakis · Dino Sejdinovic · Doina Precup · Duncan Watson-Parris · Gege Wen · George Chen · Gopal Erinjippurath · Haifeng Li · Han Zou · Herke van Hoof · Hillary A Scannell · Hiroshi Mamitsuka · Hongbao Zhang · Jaegul Choo · James Wang · James Requeima · Jessica Hwang · Jinfan Xu · Johan Mathe · Jonathan Binas · Joonseok Lee · Kalai Ramea · Kate Duffy · Kevin McCloskey · Kris Sankaran · Lester Mackey · Letif Mones · Loubna Benabbou · Lynn Kaack · Matthew Hoffman · Mayur Mudigonda · Mehrdad Mahdavi · Michael McCourt · Mingchao Jiang · Mohammad Mahdi Kamani · Neel Guha · Niccolo Dalmasso · Nick Pawlowski · Nikola Milojevic-Dupont · Paulo Orenstein · Pedram Hassanzadeh · Pekka Marttinen · Ramesh Nair · Sadegh Farhang · Samuel Kaski · Sandeep Manjanna · Sasha Luccioni · Shuby Deshpande · Soo Kim · Soukayna Mouatadid · Sunghyun Park · Tao Lin · Telmo Felgueira · Thomas Hornigold · Tianle Yuan · Tom Beucler · Tracy Cui · Volodymyr Kuleshov · Wei Yu · yang song · Ydo Wexler · Yoshua Bengio · Zhecheng Wang · Zhuangfang Yi · Zouheir Malki
- 2019 Poster: Learning Models from Data with Measurement Error: Tackling Underreporting
  Roy Adams · Yuelong Ji · Xiaobin Wang · Suchi Saria
- 2019 Oral: Learning Models from Data with Measurement Error: Tackling Underreporting
  Roy Adams · Yuelong Ji · Xiaobin Wang · Suchi Saria
- 2019 Poster: Bayesian leave-one-out cross-validation for large data
  Måns Magnusson · Michael Andersen · Johan Jonasson · Aki Vehtari
- 2019 Oral: Bayesian leave-one-out cross-validation for large data
  Måns Magnusson · Michael Andersen · Johan Jonasson · Aki Vehtari
- 2018 Poster: Yes, but Did It Work?: Evaluating Variational Inference
  Yuling Yao · Aki Vehtari · Daniel Simpson · Andrew Gelman
- 2018 Oral: Yes, but Did It Work?: Evaluating Variational Inference
  Yuling Yao · Aki Vehtari · Daniel Simpson · Andrew Gelman
- 2017 Workshop: Private and Secure Machine Learning
  Antti Honkela · Kana Shimizu · Samuel Kaski