Concept-based interpretability addresses a deep neural network's opacity by constructing explanations for its predictions using high-level units of information referred to as concepts. Research in this area, however, has mainly focused on image and graph-structured data, leaving high-stakes medical and genomic tasks, whose data is tabular, out of reach of existing methods. In this paper, we address this gap by introducing the first definition of what a high-level concept may entail in tabular data. We use this definition to propose Tabular Concept Bottleneck Models (TabCBMs), a family of interpretable self-explaining neural architectures capable of learning high-level concept explanations for tabular tasks without concept annotations. We evaluate our method on synthetic and real-world tabular tasks and show that it outperforms or performs competitively against state-of-the-art methods while providing a high level of interpretability, as measured by its ability to discover known high-level concepts. Finally, we show that TabCBM can discover important high-level concepts in synthetic datasets inspired by critical tabular tasks (e.g., single-cell RNAseq) and supports human-in-the-loop concept interventions, in which an expert can correct mispredicted concepts to boost the model's performance.
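The abstract describes a concept-bottleneck-style architecture, in which the label prediction is routed through a layer of human-interpretable concept activations, together with test-time concept interventions where an expert overwrites mispredicted concepts. The sketch below illustrates that general pattern only; it is not the authors' TabCBM implementation, and the class name `ConceptBottleneckSketch`, the layer sizes, and the mask-based intervention mechanism are illustrative assumptions (a standard PyTorch setup is assumed).

```python
# Minimal, generic sketch of a concept-bottleneck-style model with
# human-in-the-loop concept interventions. This is NOT the authors'
# TabCBM implementation; names, sizes, and the intervention mechanism
# are illustrative assumptions only.
import torch
import torch.nn as nn


class ConceptBottleneckSketch(nn.Module):
    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Map tabular features to concept activations (scores in [0, 1]).
        self.concept_encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_concepts), nn.Sigmoid(),
        )
        # Predict the label from the concept activations alone.
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x, intervened_concepts=None, intervention_mask=None):
        concepts = self.concept_encoder(x)
        # Concept intervention: an expert overwrites selected concept
        # activations with corrected values before label prediction.
        if intervened_concepts is not None and intervention_mask is not None:
            concepts = torch.where(
                intervention_mask.bool(), intervened_concepts, concepts
            )
        return self.label_predictor(concepts), concepts


# Usage: correct the first concept of every sample to 1.0 at test time.
model = ConceptBottleneckSketch(n_features=20, n_concepts=5, n_classes=2)
x = torch.randn(8, 20)
mask = torch.zeros(8, 5)
mask[:, 0] = 1.0
corrections = torch.ones(8, 5)
logits, concepts = model(
    x, intervened_concepts=corrections, intervention_mask=mask
)
```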
Author Information
Mateo Espinosa Zarlenga (University of Cambridge)
Zohreh Shams (Babylon Health)
Michael Nelson (University of Cambridge)
Been Kim (Google Brain)
Mateja Jamnik (University of Cambridge)
More from the Same Authors
- 2023: Don't trust your eyes: on the (un)reliability of feature visualizations
  Robert Geirhos · Roland S. Zimmermann · Blair Bilodeau · Wieland Brendel · Been Kim
- 2023: ProtoGate: Prototype-based Neural Networks with Local Feature Selection for Tabular Biomedical Data
  Xiangjian Jiang · Andrei Margeloiu · Nikola Simidjievski · Mateja Jamnik
- 2023: Interpretable Neural-Symbolic Concept Reasoning
  Pietro Barbiero · Gabriele Ciravegna · Francesco Giannini · Mateo Espinosa Zarlenga · Lucie Charlotte Magister · Alberto Tonda · Pietro Lió · Frederic Precioso · Mateja Jamnik · Giuseppe Marra
- 2023 Poster: On the Relationship Between Explanation and Prediction: A Causal View
  Amir-Hossein Karimi · Krikamol Muandet · Simon Kornblith · Bernhard Schölkopf · Been Kim
- 2023 Poster: Interpretable Neural-Symbolic Concept Reasoning
  Pietro Barbiero · Gabriele Ciravegna · Francesco Giannini · Mateo Espinosa Zarlenga · Lucie Charlotte Magister · Alberto Tonda · Pietro Lió · Frederic Precioso · Mateja Jamnik · Giuseppe Marra
- 2023 Affinity Workshop: LatinX in AI (LXAI) Workshop
  Laura Montoya · Jose Gallego-Posada · Pablo Rivas · Vinicius Carida · Mateo Espinosa Zarlenga · Carlos Miranda · Andres Marquez · Ramesh Doddaiah · David Alvarez-Melis · Ivan Dario Arraut Guerrero · Mateo Guaman Castro · Ana Maria Quintero-Ossa · Fabian Latorre · Julio Hurtado · Jaime David Acevedo-Viloria · Miguel Felipe Arevalo-Castiblanco
- 2017 Workshop: Workshop on Human Interpretability in Machine Learning (WHI)
  Kush Varshney · Adrian Weller · Been Kim · Dmitry Malioutov
- 2017 Tutorial: Interpretable Machine Learning
  Been Kim · Finale Doshi-Velez