Decision trees are widely used as interpretable models, but greedy training can yield suboptimal predictive performance. Training soft trees, which use probabilistic splits rather than deterministic ones, offers a way to optimize tree models globally. For interpretability, a hard tree can be recovered from a soft tree by binarizing the probabilistic splits, a process called hardening. Unfortunately, the good performance of the soft model is often lost after hardening. We systematically study two factors contributing to this performance drop: first, the loss surface of the soft tree loss has many local optima (which weakens the rationale for optimizing the soft loss in the first place), and second, the relative values of the soft tree loss do not correspond to relative values of the hard tree loss. We also demonstrate that simple mitigation methods from the literature do not fully close the gap.
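The soft-vs-hard distinction the abstract describes can be sketched in a few lines. The snippet below is an illustrative depth-1 example, not the paper's exact models: a soft tree routes each sample probabilistically through a sigmoid gate, while hardening thresholds that gate at 0.5, which can change predictions for samples near the decision boundary.

```python
import numpy as np

def soft_split(x, w, b):
    """Probabilistic (soft) routing: sigmoid gate gives the probability
    of sending each sample to the left child."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def soft_tree_predict(x, w, b, leaf_left, leaf_right):
    """Depth-1 soft tree: prediction is a probability-weighted mix of leaves."""
    p_left = soft_split(x, w, b)
    return p_left * leaf_left + (1.0 - p_left) * leaf_right

def hard_tree_predict(x, w, b, leaf_left, leaf_right):
    """Hardened tree: binarize the split, routing each sample to one leaf."""
    p_left = soft_split(x, w, b)
    return np.where(p_left >= 0.5, leaf_left, leaf_right)

# A sample near the decision boundary shows the gap hardening can create:
x = np.array([[0.1]])              # single feature, close to the boundary
w, b = np.array([1.0]), 0.0
soft = soft_tree_predict(x, w, b, leaf_left=1.0, leaf_right=0.0)
hard = hard_tree_predict(x, w, b, leaf_left=1.0, leaf_right=0.0)
# soft ~ 0.52 (a blend of both leaves); hard = 1.0 (left leaf only)
```

Because the soft prediction blends both leaves while the hard prediction commits to one, a soft tree that fits the data well can still harden into a tree with noticeably different outputs, which is the mismatch the paper studies.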
Author Information
Xin Zeng (Harvard University)
Jiayu Yao (Harvard University)
Finale Doshi-Velez (Harvard University)

Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc from the University of Cambridge as a Marshall Scholar, her PhD from MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability. Selected Additional Shinies: BECA recipient; AFOSR YIP and NSF CAREER recipient; Sloan Fellow; IEEE AI Top 10 to Watch
Weiwei Pan (Harvard University)
More from the Same Authors
-
2021 : Promises and Pitfalls of Black-Box Concept Learning Models »
· Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan -
2021 : Prediction-focused Mixture Models »
Abhishek Sharma · Sanjana Narayanan · Catherine Zeng · Finale Doshi-Velez -
2021 : Online structural kernel selection for mobile health »
Eura Shin · Predrag Klasnja · Susan Murphy · Finale Doshi-Velez -
2021 : Interpretable learning-to-defer for sequential decision-making »
Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez -
2021 : On formalizing causal off-policy sequential decision-making »
Sonali Parbhoo · Shalmali Joshi · Finale Doshi-Velez -
2022 : Leader-based Decision Learning for Cooperative Multi-Agent Reinforcement Learning »
Wenqi Chen · Xin Zeng · Amber Li -
2022 : Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare »
Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens -
2022 : Leader-based Pre-training Framework for Cooperative Multi-Agent Reinforcement Learning »
Wenqi Chen · Xin Zeng · Amber Li -
2022 : Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry »
Mark Penrod · Harrison Termotto · Varshini Reddy · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan -
2023 Poster: The Unintended Consequences of Discount Regularization: Improving Regularization in Certainty Equivalence Reinforcement Learning »
Sarah Rathnam · Sonali Parbhoo · Weiwei Pan · Susan Murphy · Finale Doshi-Velez -
2023 Poster: Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables »
Yaniv Yacoby · Weiwei Pan · Finale Doshi-Velez -
2022 : Responsible Decision-Making in Batch RL Settings »
Finale Doshi-Velez -
2021 : RL Explainability & Interpretability Panel »
Ofra Amir · Finale Doshi-Velez · Alan Fern · Zachary Lipton · Omer Gottesman · Niranjani Prasad -
2021 : [01:50 - 02:35 PM UTC] Invited Talk 3: Interpretability in High Dimensions: Concept Bottlenecks and Beyond »
Finale Doshi-Velez -
2021 Poster: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement »
Andrew Ross · Finale Doshi-Velez -
2021 Oral: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement »
Andrew Ross · Finale Doshi-Velez -
2021 Poster: State Relevance for Off-Policy Evaluation »
Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez -
2021 Spotlight: State Relevance for Off-Policy Evaluation »
Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez -
2020 : Keynote #2 Finale Doshi-Velez »
Finale Doshi-Velez -
2020 Poster: Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions »
Omer Gottesman · Joseph Futoma · Yao Liu · Sonali Parbhoo · Leo Celi · Emma Brunskill · Finale Doshi-Velez -
2019 : Quality of Uncertainty Quantification for Bayesian Neural Network Inference »
Jiayu Yao -
2019 Poster: Combining parametric and nonparametric models for off-policy evaluation »
Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez -
2019 Oral: Combining parametric and nonparametric models for off-policy evaluation »
Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez -
2018 Poster: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning »
Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft -
2018 Poster: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors »
Soumya Ghosh · Jiayu Yao · Finale Doshi-Velez -
2018 Oral: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors »
Soumya Ghosh · Jiayu Yao · Finale Doshi-Velez -
2018 Oral: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning »
Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft -
2017 Tutorial: Interpretable Machine Learning »
Been Kim · Finale Doshi-Velez