In safety-critical applications, practitioners are reluctant to trust neural networks when no interpretable explanations are available. Many attempts to provide such explanations revolve around pixel-level attributions or rely on previously known concepts. In this paper we aim to provide explanations by provably identifying \emph{high-level, previously unknown concepts}. To this end, we propose a probabilistic modeling framework from which we derive Concept Learning And Prediction (CLAP), a VAE-based classifier that uses visually interpretable concepts as linear predictors. Assuming that the data-generating mechanism involves interpretable concepts, we prove that our method identifies them while attaining optimal classification accuracy. We validate CLAP on synthetic experiments and show that, on the ChestXRay dataset, it effectively discovers interpretable factors for classifying diseases.
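To make the described architecture concrete, here is a minimal PyTorch sketch in the spirit of the abstract: a VAE whose latent variables play the role of candidate concepts, with a linear head on those latents producing the class prediction. All names (`ClapSketch`, `n_concepts`, the loss weights) are hypothetical illustrations for this page, not the authors' implementation, and the loss below is a generic VAE-plus-classification objective rather than the paper's exact one.

```python
# Hedged sketch: a Gaussian VAE with a linear classifier on the latent
# "concepts", loosely matching the abstract's description of CLAP.
# Names and loss weights are placeholders, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClapSketch(nn.Module):
    def __init__(self, in_dim=784, n_concepts=8, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, n_concepts)      # concept means
        self.logvar = nn.Linear(128, n_concepts)  # concept log-variances
        self.dec = nn.Sequential(nn.Linear(n_concepts, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))
        # Linear predictor on the latent concepts, as in the abstract.
        self.clf = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar, self.clf(z)

def loss(x, y, x_hat, mu, logvar, logits, beta=1.0):
    rec = F.mse_loss(x_hat, x)                                   # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    ce = F.cross_entropy(logits, y)                              # prediction
    return rec + beta * kl + ce

if __name__ == "__main__":
    x = torch.randn(4, 784)
    y = torch.randint(0, 2, (4,))
    model = ClapSketch()
    x_hat, mu, logvar, logits = model(x)
    print(loss(x, y, x_hat, mu, logvar, logits))
```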
Author Information
Armeen Taeb (Swiss Federal Institute of Technology)
Nicolò Ruggeri (Max-Planck Institute / ETH)
Carina Schnuck (ETH Zurich)
Fanny Yang (ETH Zurich)
More from the Same Authors
- 2021: Maximizing the robust margin provably overfits on noiseless data
  Fanny Yang · Reinhard Heckel · Michael Aerni · Alexandru Tifrea · Konstantin Donhauser
- 2021: Surprising benefits of ridge regularization for noiseless regression
  Konstantin Donhauser · Alexandru Tifrea · Michael Aerni · Reinhard Heckel · Fanny Yang
- 2021: Novel disease detection using ensembles with regularized disagreement
  Alexandru Tifrea · Eric Stavarache · Fanny Yang
- 2022: Why adversarial training can hurt robust accuracy
  Jacob Clarysse · Julia Hörrmann · Fanny Yang
- 2022 Poster: Fast rates for noisy interpolation require rethinking the effect of inductive bias
  Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
- 2022 Spotlight: Fast rates for noisy interpolation require rethinking the effect of inductive bias
  Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
- 2021 Poster: How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2021 Spotlight: How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2020: QA for invited talk 3 Yang
  Fanny Yang
- 2020: Invited talk 3 Yang
  Fanny Yang
- 2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy
  Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang