Poster

FastCAV: Efficient Computation of Concept Activation Vectors for Explaining Deep Neural Networks

Laines Schmalwasser · Niklas Penzel · Joachim Denzler · Julia Niebling

East Exhibition Hall A-B #E-2102
[ Project Page ]
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Concepts such as objects, patterns, and shapes are how humans understand the world. Building on this intuition, concept-based explainability methods aim to study representations learned by deep neural networks in relation to human-understandable concepts. Here, Concept Activation Vectors (CAVs) are an important tool and can identify whether a model learned a concept or not. However, the computational cost and time requirements of existing CAV computation pose a significant challenge, particularly in large-scale, high-dimensional architectures. To address this limitation, we introduce FastCAV, a novel approach that accelerates the extraction of CAVs by up to 63.6× (on average 46.4×). We provide a theoretical foundation for our approach and give concrete assumptions under which it is equivalent to established SVM-based methods. Our empirical results demonstrate that CAVs calculated with FastCAV maintain similar performance while being more efficient and stable. In downstream applications, i.e., concept-based explanation methods, we show that FastCAV can act as a replacement leading to equivalent insights. Hence, our approach enables previously infeasible investigations of deep models, which we demonstrate by tracking the evolution of concepts during model training.
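The abstract does not spell out how a CAV is obtained. As background, the classical recipe (from the original TCAV work the abstract alludes to) trains a linear classifier separating activations of concept examples from random counterexamples and takes the vector normal to the decision boundary. Below is a minimal, purely illustrative numpy sketch using a closed-form difference-of-means direction as a fast linear separator on synthetic "activations"; it is not the authors' FastCAV implementation, and all names and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "activations": concept examples are shifted along a hidden direction.
d = 16
hidden = np.zeros(d)
hidden[0] = 1.0
concept_acts = rng.normal(size=(200, d)) + 3.0 * hidden  # concept set
random_acts = rng.normal(size=(200, d))                  # random counterexamples

def mean_difference_direction(pos, neg):
    """Closed-form linear direction: difference of class means, normalized.

    This plays the role of a CAV-like concept direction; established methods
    instead fit an SVM and use its weight vector.
    """
    v = pos.mean(axis=0) - neg.mean(axis=0)
    return v / np.linalg.norm(v)

cav = mean_difference_direction(concept_acts, random_acts)

# On this toy data the recovered direction aligns with the hidden concept shift.
print(float(cav @ hidden))
```

Projecting a new activation onto `cav` (a dot product) then scores how strongly that input expresses the concept, which is the sensitivity underlying downstream concept-based explanations.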

Lay Summary:

To understand how complex AI systems learn, researchers try to see whether they recognize human-understandable concepts (like "stripes" or "zigzagged"). A method called Concept Activation Vectors (CAVs) is used to identify if a model has learned such concepts. However, calculating CAVs for modern, large AI models is often too slow and computationally expensive, limiting their practical use.

We introduce FastCAV, a new approach that computes these CAVs much more quickly, on average 46 times faster. We provide theoretical support and demonstrate through experiments that FastCAV produces results of similar quality to established approaches, but with significantly improved efficiency and stability.

This speed-up makes concept-based explanations more practical for researchers. FastCAV enables investigations that were previously too costly or time-consuming, such as tracking how an AI develops an understanding of different concepts throughout its training process. This allows for a deeper understanding of how complex AI models function.
