Bayesian Neural Networks (BNNs) have recently received increasing attention for their ability to provide well-calibrated posterior uncertainties. However, model selection, even choosing the number of nodes, remains an open question. Recent work has proposed the use of a horseshoe prior over node pre-activations of a Bayesian neural network, which effectively turns off nodes that do not help explain the data. In this work, we propose several modeling and inference advances that consistently improve the compactness of the learned model while maintaining predictive performance, especially in smaller-sample settings including reinforcement learning.
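To make the node-level sparsity concrete, below is a minimal NumPy sketch of a node-wise horseshoe prior over a single layer's weights: each unit's incoming weights share a heavy-tailed half-Cauchy scale, so units whose scales shrink toward zero are effectively switched off. This is an illustrative sketch of the general construction only, not the paper's exact parameterization or its structured variational inference scheme; the function name and the global-scale hyperparameter b_g are assumptions made for the example.

    import numpy as np

    def sample_horseshoe_layer(n_in, n_out, b_g=1.0, rng=None):
        """Sample one layer's weights under a node-wise horseshoe prior (illustrative sketch)."""
        rng = np.random.default_rng() if rng is None else rng
        # Layer-wide (global) scale: tau ~ Half-Cauchy(0, b_g)
        tau = np.abs(b_g * rng.standard_cauchy())
        # Per-unit (local) scales: nu_k ~ Half-Cauchy(0, 1), one per output unit
        nu = np.abs(rng.standard_cauchy(size=n_out))
        # Incoming weights of unit k: w_k ~ N(0, (tau * nu_k)^2 I)
        W = rng.standard_normal((n_in, n_out)) * (tau * nu)
        return W, tau, nu

    # Heavy-tailed local scales let a few units stay large while most shrink
    # toward zero, which is what yields compact networks under this prior.
    W, tau, nu = sample_horseshoe_layer(n_in=10, n_out=50)
    print("units with near-zero scale:", int(np.sum(tau * nu < 1e-2)))

In the paper's setting, variational inference over these scale variables (rather than the ancestral sampling shown here) determines which units remain active.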
Author Information
Soumya Ghosh (IBM Research)
Jiayu Yao (Harvard University)
Finale Doshi-Velez (Harvard University)

Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc from the University of Cambridge as a Marshall Scholar, her PhD from MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability. Selected Additional Shinies: BECA recipient; AFOSR YIP and NSF CAREER recipient; Sloan Fellow; IEEE AI Top 10 to Watch.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors »
  Thu. Jul 12th, 03:50 -- 04:00 PM, Room A4
More from the Same Authors
- 2021 : Promises and Pitfalls of Black-Box Concept Learning Models »
  Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
- 2021 : Prediction-focused Mixture Models »
  Abhishek Sharma · Sanjana Narayanan · Catherine Zeng · Finale Doshi-Velez
- 2021 : Online structural kernel selection for mobile health »
  Eura Shin · Predrag Klasnja · Susan Murphy · Finale Doshi-Velez
- 2021 : Interpretable learning-to-defer for sequential decision-making »
  Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez
- 2021 : On formalizing causal off-policy sequential decision-making »
  Sonali Parbhoo · Shalmali Joshi · Finale Doshi-Velez
- 2022 : Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare »
  Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens
- 2022 : From Soft Trees to Hard Trees: Gains and Losses »
  Xin Zeng · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
- 2022 : Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry »
  Mark Penrod · Harrison Termotto · Varshini Reddy · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
- 2023 : Why do universal adversarial attacks work on large language models?: Geometry might be the answer »
  Varshini Subhash · Anna Bialas · Siddharth Swaroop · Weiwei Pan · Finale Doshi-Velez
- 2023 : Implications of Gaussian process kernel mismatch for out-of-distribution data »
  Beau Coker · Finale Doshi-Velez
- 2023 : Inverse Transition Learning for Characterizing Near-Optimal Dynamics in Offline Reinforcement Learning »
  Leo Benac · Sonali Parbhoo · Finale Doshi-Velez
- 2023 : Discovering User Types: Characterization of User Traits by Task-Specific Behaviors in Reinforcement Learning »
  Lars L. Ankile · Brian Ham · Kevin Mao · Eura Shin · Siddharth Swaroop · Finale Doshi-Velez · Weiwei Pan
- 2023 : Adaptive interventions for both accuracy and time in AI-assisted human decision making »
  Siddharth Swaroop · Zana Buçinca · Krzysztof Gajos · Finale Doshi-Velez
- 2023 : SAP-sLDA: An Interpretable Interface for Exploring Unstructured Text »
  Charumathi Badrinath · Weiwei Pan · Finale Doshi-Velez
- 2023 : Signature Activation: A Sparse Signal View for Holistic Saliency »
  Jose Tello Ayala · Akl Fahed · Weiwei Pan · Eugene Pomerantsev · Patrick Ellinor · Anthony Philippakis · Finale Doshi-Velez
- 2023 : Implications of kernel mismatch for OOD data »
  Beau Coker · Finale Doshi-Velez
- 2023 : Soft prompting might be a bug, not a feature »
  Luke Bailey · Gustaf Ahdritz · Anat Kleiman · Siddharth Swaroop · Finale Doshi-Velez · Weiwei Pan
- 2023 : Bayesian Inverse Transition Learning for Offline Settings »
  Leo Benac · Sonali Parbhoo · Finale Doshi-Velez
- 2023 Poster: The Unintended Consequences of Discount Regularization: Improving Regularization in Certainty Equivalence Reinforcement Learning »
  Sarah Rathnam · Sonali Parbhoo · Weiwei Pan · Susan Murphy · Finale Doshi-Velez
- 2023 Poster: Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables »
  Yaniv Yacoby · Weiwei Pan · Finale Doshi-Velez
- 2022 : Responsible Decision-Making in Batch RL Settings »
  Finale Doshi-Velez
- 2021 : RL Explainability & Interpretability Panel »
  Ofra Amir · Finale Doshi-Velez · Alan Fern · Zachary Lipton · Omer Gottesman · Niranjani Prasad
- 2021 : [01:50 - 02:35 PM UTC] Invited Talk 3: Interpretability in High Dimensions: Concept Bottlenecks and Beyond »
  Finale Doshi-Velez
- 2021 Poster: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement »
  Andrew Ross · Finale Doshi-Velez
- 2021 Oral: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement »
  Andrew Ross · Finale Doshi-Velez
- 2021 Poster: State Relevance for Off-Policy Evaluation »
  Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez
- 2021 Spotlight: State Relevance for Off-Policy Evaluation »
  Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez
- 2020 : Keynote #2 Finale Doshi-Velez »
  Finale Doshi-Velez
- 2020 Poster: Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions »
  Omer Gottesman · Joseph Futoma · Yao Liu · Sonali Parbhoo · Leo Celi · Emma Brunskill · Finale Doshi-Velez
- 2019 : Quality of Uncertainty Quantification for Bayesian Neural Network Inference »
  Jiayu Yao
- 2019 Poster: Combining parametric and nonparametric models for off-policy evaluation »
  Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez
- 2019 Oral: Combining parametric and nonparametric models for off-policy evaluation »
  Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez
- 2018 Poster: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning »
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2018 Oral: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning »
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2017 Tutorial: Interpretable Machine Learning »
  Been Kim · Finale Doshi-Velez