

Afternoon Poster in Workshop: Artificial Intelligence & Human Computer Interaction

CHILLI: A data context-aware perturbation method for XAI

Saif Anwar · Nathan Griffiths · Abhir Bhalerao · Thomas Popham · Mark Bell


Abstract:

The trustworthiness of Machine Learning (ML) models can be difficult to assess, but it is critical in high-risk or ethically sensitive applications. Many models are treated as a 'black box', where the reasoning or criteria behind a final decision are opaque to the user. To address this, some existing Explainable AI (XAI) approaches approximate model behaviour using perturbed data. However, such methods have been criticised for ignoring feature dependencies, with explanations being based on potentially unrealistic data. We propose a novel framework, CHILLI, for incorporating data context into XAI by generating contextually aware perturbations that are faithful to the training data of the base model being explained. This is shown to improve both the soundness and accuracy of the explanations.
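To make the perturbation idea concrete, below is a minimal sketch, not the authors' implementation, contrasting standard independent-noise perturbations with one plausible form of data-context-aware sampling that draws perturbations from the neighbourhood of the training data. The function names, the nearest-neighbour interpolation strategy, and the parameters `k` and `alpha` are all illustrative assumptions; the abstract does not specify CHILLI's actual mechanism.

```python
# Sketch: independent-noise perturbations vs. a hypothetical
# data-context-aware alternative that stays near the training data.
import numpy as np

rng = np.random.default_rng(0)

def independent_perturbations(instance, n_samples, scale=1.0):
    """LIME-style perturbation: add independent Gaussian noise per feature.
    Ignores feature dependencies, so samples may be unrealistic."""
    noise = rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    return instance + noise

def context_aware_perturbations(instance, X_train, n_samples, k=20, alpha=0.5):
    """Hypothetical context-aware perturbation: interpolate between the
    instance and its k nearest training points, so samples respect the
    feature dependencies observed in the training data."""
    dists = np.linalg.norm(X_train - instance, axis=1)
    neighbours = X_train[np.argsort(dists)[:k]]
    picks = neighbours[rng.integers(0, k, size=n_samples)]
    weights = rng.uniform(0.0, alpha, size=(n_samples, 1))
    return (1.0 - weights) * instance + weights * picks

# Toy example: two strongly correlated features. Independent noise breaks
# the correlation; the context-aware samples preserve it.
X_train = rng.multivariate_normal([0, 0], [[1.0, 0.9], [0.9, 1.0]], size=500)
x = X_train[0]
naive = independent_perturbations(x, n_samples=100)
aware = context_aware_perturbations(x, X_train, n_samples=100)
```

The design point the sketch illustrates is the one the abstract criticises: perturbing each feature independently can produce off-manifold samples (e.g., breaking a strong correlation between two features), whereas sampling relative to real training points keeps the perturbations consistent with the data context.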
