

Poster in Workshop: Next Generation of AI Safety

CoSy: Evaluating Textual Explanations of Neurons

Laura Kopf · Philine Bommer · Anna Hedström · Sebastian Lapuschkin · Marina Höhne · Kirill Bykov

Keywords: [ AI Safety ] [ Evaluation of Explainability Methods ] [ Explainable AI ]


Abstract:

A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is the ability to explain learned concepts within their latent representations. While methods exist to connect neurons to human-understandable textual descriptions, evaluating the quality of these explanations is challenging due to the lack of a unified quantitative approach. We introduce CoSy (Concept Synthesis), a novel, architecture-agnostic framework for evaluating textual explanations of latent neurons. Given a textual explanation, our framework uses a generative model conditioned on textual input to create data points representing the explanation, then compares the neuron's response to these synthetic data points against its response to control data points to estimate explanation quality. We validate our framework through meta-evaluation experiments and benchmark various concept-based textual explanation methods for computer vision tasks, revealing significant differences in quality.
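To make the evaluation loop concrete, below is a minimal Python sketch of the comparison step the abstract describes. The generative step is abstracted away: `synthetic_inputs` stands in for data points produced by a text-conditioned generative model from the explanation, `neuron_fn` is a hypothetical callable exposing the target neuron's activations, and the AUC-style separability score is an illustrative choice of quality measure, not necessarily the paper's exact metric.

```python
import numpy as np

def cosy_score(neuron_fn, synthetic_inputs, control_inputs):
    """Score one textual explanation for one neuron (illustrative sketch).

    neuron_fn         -- callable mapping a batch of inputs to the target
                         neuron's scalar activations
    synthetic_inputs  -- data points generated from the explanation text
                         (e.g. by a text-to-image model)
    control_inputs    -- data points unrelated to the explanation

    Returns an AUC-style separability score in [0, 1]: how reliably the
    neuron activates more strongly on explanation-matching inputs than on
    control inputs (1.0 = perfectly separable, 0.5 = uninformative).
    """
    a_syn = np.asarray(neuron_fn(synthetic_inputs))  # activations on synthetic set
    a_ctl = np.asarray(neuron_fn(control_inputs))    # activations on control set
    # Pairwise comparison, equivalent to ROC-AUC with activation as the score.
    wins = (a_syn[:, None] > a_ctl[None, :]).mean()
    ties = (a_syn[:, None] == a_ctl[None, :]).mean()
    return wins + 0.5 * ties

# Toy demo: a neuron that genuinely fires on the explained concept.
rng = np.random.default_rng(0)
neuron_fn = lambda xs: xs                 # identity stand-in for a real neuron
syn = rng.normal(2.0, 1.0, size=50)       # stronger activations on concept inputs
ctl = rng.normal(0.0, 1.0, size=200)      # baseline activations on control inputs
print(cosy_score(neuron_fn, syn, ctl))    # close to 1.0 for a faithful explanation
```

In a real pipeline, `neuron_fn` would run the synthetic and control images through the network and read out the chosen neuron; the score then directly quantifies how well the textual explanation predicts what drives that neuron.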
