Identifying Interpretable Subspaces in Image Representations
Neha Mukund Kalibhat · Shweta Bhardwaj · C. Bayan Bruss · Hamed Firooz · Maziar Sanjabi · Soheil Feizi

Tue Jul 25 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #705

We propose Automatic Feature Explanation using Contrasting Concepts (FALCON), an interpretability framework to explain features of image representations. For a target feature, FALCON captions its highly activating cropped images using a large captioning dataset (like LAION-400m) and a pre-trained vision-language model such as CLIP. Each word among the captions is scored and ranked, yielding a small number of shared, human-understandable concepts that closely describe the target feature. FALCON also applies contrastive interpretation using lowly activating (counterfactual) images to eliminate spurious concepts. Although many existing approaches interpret features independently, we observe that, in state-of-the-art self-supervised and supervised models, less than 20% of the representation space can be explained by individual features. We show that features in larger spaces become more interpretable when studied in groups and can be explained with high-order scoring concepts through FALCON. We discuss how extracted concepts can be used to explain and debug failures in downstream tasks. Finally, we present a technique to transfer concepts from one (explainable) representation space to another unseen representation space by learning a simple linear transformation.
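The contrastive scoring step described above admits a short illustration. The sketch below is not the authors' implementation: it assumes the OpenAI clip package, and the image paths, candidate word list, and simple mean-similarity score are hypothetical stand-ins for FALCON's caption retrieval and word-ranking pipeline.

    import torch
    import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    @torch.no_grad()
    def contrastive_concepts(high_crop_paths, low_image_paths, candidate_words, k=5):
        # Encode a list of image files into unit-norm CLIP embeddings.
        def encode_images(paths):
            batch = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
            feats = model.encode_image(batch)
            return feats / feats.norm(dim=-1, keepdim=True)

        high = encode_images(high_crop_paths)  # crops that highly activate the target feature
        low = encode_images(low_image_paths)   # lowly activating (counterfactual) images

        # In the actual pipeline, candidate words come from captions retrieved
        # for the crops (e.g., from LAION-400m); here they are given directly.
        text = model.encode_text(clip.tokenize(candidate_words).to(device))
        text = text / text.norm(dim=-1, keepdim=True)

        # Keep words that match the highly activating crops much better than
        # the counterfactual images; this suppresses spurious concepts.
        score = (high @ text.T).mean(dim=0) - (low @ text.T).mean(dim=0)
        top = score.topk(min(k, len(candidate_words))).indices
        return [candidate_words[i] for i in top]

The concept-transfer step admits a similarly small sketch. Assuming paired representations of the same images in an explained source space and an unseen target space, a least-squares linear map carries concepts across; this is a hedged illustration of "learning a simple linear transformation", not the paper's exact procedure.

    def fit_concept_transfer(Z_target, Z_source):
        # Solve min_W || Z_target @ W - Z_source || so that embeddings from the
        # unseen target space can be mapped into the explained source space,
        # where FALCON's concepts are already available.
        return torch.linalg.lstsq(Z_target, Z_source).solution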

Author Information

Neha Mukund Kalibhat (University of Maryland)
Shweta Bhardwaj (University of Maryland College Park)
C. Bayan Bruss (Capital One)
Hamed Firooz (Facebook)
Maziar Sanjabi (Meta AI)
Soheil Feizi (University of Maryland)