Natural-XAI: Explainable AI with Natural Language Explanations

Oana-Maria Camburu · Zeynep Akata


In this tutorial, we present the emerging direction of explainability that we refer to as Natural-XAI. Natural-XAI aims to build AI models that (1) learn from natural language explanations for the ground-truth labels at training time, and (2) provide such explanations for their predictions at deployment time. For example, a self-driving car would not only see at training time that it has to stop in a certain environment, but would additionally be told why this is the case, e.g., “Because the traffic light in front is red.” At deployment time, the self-driving car would likewise be able to provide such natural language explanations for its actions, thus reassuring its passengers. This direction has recently received increasing attention.
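Models in this setting are often trained with a joint objective that combines a label-prediction loss with an explanation-generation loss. A minimal sketch of that idea (not from the tutorial itself; the function name and the weighting parameter `alpha` are illustrative assumptions):

```python
def joint_loss(task_loss: float, explanation_loss: float, alpha: float = 0.5) -> float:
    """Toy sketch of a Natural-XAI training objective.

    Combines the loss for predicting the ground-truth label with the loss
    for generating the natural language explanation. `alpha` is a
    hypothetical hyperparameter trading off the two objectives.
    """
    return task_loss + alpha * explanation_loss


# Example: a label loss of 0.8 and an explanation loss of 0.4,
# weighted with alpha = 0.5, give a combined loss of 1.0.
print(joint_loss(0.8, 0.4))
```

In practice the two loss terms would come from, e.g., a classification head and a sequence decoder sharing an encoder, but the weighted-sum structure is the same.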


Mon 8:00 a.m. - 8:05 a.m.    Live Introduction (Live intro)
Mon 8:05 a.m. - 9:10 a.m.    Part One (Talk), Oana-Maria Camburu
Mon 9:10 a.m. - 9:25 a.m.    Q&A Part One (Q&A), Oana-Maria Camburu, Zeynep Akata
Mon 9:25 a.m. - 9:40 a.m.    Break
Mon 9:40 a.m. - 10:45 a.m.   Part Two (Talk), Zeynep Akata
Mon 10:45 a.m. - 11:00 a.m.  Q&A Part Two (Q&A), Oana-Maria Camburu, Zeynep Akata