Workshop: XXAI: Extending Explainable AI Beyond Deep Models and Classifiers
Invited Talk 1: Scott Lundberg - From local explanations to global understanding with trees
Tree-based machine learning models are popular nonlinear predictive models, yet comparatively little attention has been paid to explaining their predictions. In this talk I will explain how to improve their interpretability through the combination of many local game-theoretic explanations. I'll show how combining many high-quality local explanations allows us to represent global structure while retaining local faithfulness to the original model. This will enable us to identify high-magnitude but low-frequency nonlinear mortality risk factors in the US population, to highlight distinct population subgroups with shared risk characteristics, to identify nonlinear interaction effects among risk factors for chronic kidney disease, and to monitor a machine learning model deployed in a hospital by identifying which features are degrading the model's performance over time.
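The core idea of combining local game-theoretic explanations into a global view can be sketched in a few lines. The code below is an illustrative toy, not the talk's actual algorithm: it computes exact Shapley values by brute-force subset enumeration for a hypothetical three-feature model (the talk's approach, TreeSHAP, computes these efficiently for tree ensembles), then aggregates the local attributions by mean absolute value to obtain a global importance ranking. The model `f`, the background sample, and all names here are assumptions for the sketch.

```python
import itertools
from math import factorial

import numpy as np

def shapley_values(f, x, background):
    """Exact Shapley values for one instance x, by enumerating all
    feature coalitions. Features outside a coalition are marginalized
    by averaging the model over a background sample."""
    n = x.shape[0]
    phi = np.zeros(n)

    def value(S):
        # Expected model output when the features in S are fixed to x.
        Xb = background.copy()
        Xb[:, list(S)] = x[list(S)]
        return f(Xb).mean()

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

rng = np.random.default_rng(0)
background = rng.normal(size=(50, 3))          # background data set
f = lambda X: X[:, 0] * X[:, 1] + X[:, 2]      # toy nonlinear model

# Many local explanations, one per instance ...
X = rng.normal(size=(20, 3))
local = np.array([shapley_values(f, x, background) for x in X])

# ... combined into a global importance measure.
global_importance = np.abs(local).mean(axis=0)
```

Each row of `local` is a faithful additive explanation of one prediction (the attributions sum to that prediction minus the background average), while `global_importance` summarizes the whole population; keeping the per-instance rows also lets one slice them to find subgroups with shared risk characteristics, as in the talk.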