As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanations for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is little consensus on what interpretable machine learning is and how it should be measured. In this talk, we first suggest a definition of interpretability and describe when interpretability is needed (and when it is not). We then review related work, from classical AI systems through recent efforts toward interpretability in deep learning. Finally, we present a taxonomy for rigorous evaluation and offer recommendations for researchers. We end by discussing open questions and concrete problems for new researchers.
Author Information
Been Kim (Google Brain)
Finale Doshi-Velez (Harvard University)

Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc at the University of Cambridge as a Marshall Scholar, her PhD at MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability. Selected Additional Shinies: BECA recipient; AFOSR YIP and NSF CAREER recipient; Sloan Fellow; IEEE AI Top 10 to Watch.
More from the Same Authors
- 2021: Promises and Pitfalls of Black-Box Concept Learning Models
  Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
- 2021: Prediction-focused Mixture Models
  Abhishek Sharma · Sanjana Narayanan · Catherine Zeng · Finale Doshi-Velez
- 2021: Online structural kernel selection for mobile health
  Eura Shin · Predrag Klasnja · Susan Murphy · Finale Doshi-Velez
- 2021: Interpretable learning-to-defer for sequential decision-making
  Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez
- 2021: On formalizing causal off-policy sequential decision-making
  Sonali Parbhoo · Shalmali Joshi · Finale Doshi-Velez
- 2022: Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare
  Shengpu Tang · Maggie Makar · Michael Sjoding · Finale Doshi-Velez · Jenna Wiens
- 2022: From Soft Trees to Hard Trees: Gains and Losses
  Xin Zeng · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
- 2022: Success of Uncertainty-Aware Deep Models Depends on Data Manifold Geometry
  Mark Penrod · Harrison Termotto · Varshini Reddy · Jiayu Yao · Finale Doshi-Velez · Weiwei Pan
- 2023 Poster: On the Relationship Between Explanation and Prediction: A Causal View
  Amir-Hossein Karimi · Krikamol Muandet · Simon Kornblith · Bernhard Schölkopf · Been Kim
- 2023 Poster: The Unintended Consequences of Discount Regularization: Improving Regularization in Certainty Equivalence Reinforcement Learning
  Sarah Rathnam · Sonali Parbhoo · Weiwei Pan · Susan Murphy · Finale Doshi-Velez
- 2023 Poster: Mitigating the Effects of Non-Identifiability on Inference for Bayesian Neural Networks with Latent Variables
  Yaniv Yacoby · Weiwei Pan · Finale Doshi-Velez
- 2022: Responsible Decision-Making in Batch RL Settings
  Finale Doshi-Velez
- 2021: RL Explainability & Interpretability Panel
  Ofra Amir · Finale Doshi-Velez · Alan Fern · Zachary Lipton · Omer Gottesman · Niranjani Prasad
- 2021: [01:50 - 02:35 PM UTC] Invited Talk 3: Interpretability in High Dimensions: Concept Bottlenecks and Beyond
  Finale Doshi-Velez
- 2021 Poster: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement
  Andrew Ross · Finale Doshi-Velez
- 2021 Oral: Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement
  Andrew Ross · Finale Doshi-Velez
- 2021 Poster: State Relevance for Off-Policy Evaluation
  Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez
- 2021 Spotlight: State Relevance for Off-Policy Evaluation
  Simon Shen · Yecheng Jason Ma · Omer Gottesman · Finale Doshi-Velez
- 2020: Keynote #2
  Finale Doshi-Velez
- 2020 Poster: Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions
  Omer Gottesman · Joseph Futoma · Yao Liu · Sonali Parbhoo · Leo Celi · Emma Brunskill · Finale Doshi-Velez
- 2019 Poster: Combining parametric and nonparametric models for off-policy evaluation
  Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez
- 2019 Oral: Combining parametric and nonparametric models for off-policy evaluation
  Omer Gottesman · Yao Liu · Scott Sussex · Emma Brunskill · Finale Doshi-Velez
- 2018 Poster: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2018 Poster: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors
  Soumya Ghosh · Jiayu Yao · Finale Doshi-Velez
- 2018 Oral: Structured Variational Learning of Bayesian Neural Networks with Horseshoe Priors
  Soumya Ghosh · Jiayu Yao · Finale Doshi-Velez
- 2018 Oral: Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
  Stefan Depeweg · Jose Miguel Hernandez-Lobato · Finale Doshi-Velez · Steffen Udluft
- 2017 Workshop: Workshop on Human Interpretability in Machine Learning (WHI)
  Kush Varshney · Adrian Weller · Been Kim · Dmitry Malioutov