Poster in Workshop: Interpretable Machine Learning in Healthcare

Tree-based local explanations of machine learning model predictions – AraucanaXAI

Enea Parimbelli · Giovanna Nicora · Szymon Wilk · Wojtek Michalowski · Riccardo Bellazzi


Abstract:

Increasingly complex learning methods such as boosting, bagging, and deep learning have made ML models more accurate but harder to understand and interpret. A tradeoff between performance and intelligibility must often be faced, especially in high-stakes applications like medicine. In this article we propose a novel methodological approach for generating an explanation of the prediction a generic ML model makes for a specific instance, applicable to both classification and regression tasks. Advantages of the proposed XAI approach include improved fidelity to the original model, the ability to handle non-linear decision boundaries, and native support for both classification and regression problems.
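The abstract does not spell out the algorithm, but tree-based local explanation methods of this kind typically fit an interpretable surrogate tree to the black-box model's behavior in a neighborhood of the queried instance. The sketch below is an illustration under assumed choices (Gaussian neighborhood sampling, a shallow scikit-learn CART surrogate, and hypothetical names such as explain_instance and black_box_predict); it is not the authors' exact AraucanaXAI procedure.

```python
# Minimal sketch of a tree-based local surrogate explainer.
# Assumptions (not taken from the abstract): the neighbourhood is drawn
# by Gaussian perturbation of the query instance, and the surrogate is
# a shallow CART tree fit on the black-box model's predictions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def explain_instance(black_box_predict, x, feature_names,
                     n_samples=500, scale=0.1, max_depth=3, seed=0):
    """Fit a local decision tree that mimics the black box around x."""
    rng = np.random.default_rng(seed)
    # 1. Sample a synthetic neighbourhood around the query instance x.
    X_local = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. Label the neighbourhood with the black-box model's predictions.
    y_local = black_box_predict(X_local)
    # 3. Fit a shallow, human-readable surrogate tree on those labels.
    #    (For regression, DecisionTreeRegressor would play the same role.)
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
    tree.fit(X_local, y_local)
    # Fidelity: how often the surrogate agrees with the black box locally.
    fidelity = (tree.predict(X_local) == y_local).mean()
    return export_text(tree, feature_names=list(feature_names)), fidelity
```

The path from the surrogate tree's root to the leaf containing x can then be read as a rule-based explanation, and the local fidelity score quantifies how faithfully that rule reflects the original model near the instance.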
