The proposed workshop pays special attention to the theoretical foundations, limitations, and new application trends of XAI. These issues reflect emerging bottlenecks in the future development of XAI, for example: (1) there is no theoretical definition of XAI, nor a solid, widely used formulation of even a single specific explanation task; (2) there is no rigorous formulation of the essence of the "semantics" encoded in a DNN; (3) how to bridge the gap between connectionism and symbolism in AI research has not been thoroughly explored; (4) how to evaluate the correctness and trustworthiness of an explanation result is still an open problem; (5) how to connect intuitive explanations (e.g., attribution/importance-based explanations) with a DNN's representation capacity (e.g., its generalization power) remains a significant challenge; and (6) using explanations to guide architecture design or substantially boost the performance of a DNN is still a bottleneck. Therefore, this workshop aims to bring together researchers, engineers, and industrial practitioners who are concerned with the interpretability, safety, and reliability of artificial intelligence. Through a broad discussion of the above bottleneck issues, we hope to develop critical and constructive views on the future development of XAI. The research outcomes are also expected to profoundly influence critical industrial applications such as medical diagnosis, finance, and autonomous driving.
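As a concrete illustration of the attribution/importance-based explanations mentioned in point (5), below is a minimal sketch of gradient × input attribution, one of the simplest importance-based methods. It assumes PyTorch; the toy model and the helper name gradient_x_input are illustrative only, not taken from any workshop paper.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier; stands in for any differentiable DNN.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))

def gradient_x_input(model, x, target_class):
    """Gradient * input attribution: per-feature importance for one logit."""
    x = x.clone().requires_grad_(True)
    logit = model(x)[target_class]   # scalar logit of the class to explain
    logit.backward()                 # populates x.grad with d(logit)/dx
    return (x.grad * x).detach()     # elementwise importance per input feature

x = torch.randn(4)
print(gradient_x_input(model, x, target_class=0))
```

Bottlenecks (4) and (5) above ask precisely how to verify that such scores are correct and how they relate to the model's generalization behavior.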
Accepted papers: https://arxiv.org/html/2107.08821
Fri 4:59 a.m. - 5:00 a.m. | Accepted papers (proceedings link)
Fri 5:00 a.m. - 5:02 a.m. | [12:00 - 12:02 PM UTC] Welcome (Demonstration) | Quanshi Zhang
Fri 5:02 a.m. - 5:47 a.m. | [12:02 - 12:47 PM UTC] Invited Talk 1: Explainable AI: How Machines Gain Justified Trust from Humans (Talk) | Song-Chun Zhu
Fri 5:47 a.m. - 5:52 a.m. | [12:47 - 12:52 PM UTC] Q&A
Fri 5:52 a.m. - 6:45 a.m. | [12:52 - 01:45 PM UTC] Invited Talk 2: Toward Explainable AI (Talk) | Klaus-Robert Mueller · Wojciech Samek · Gregoire Montavon
Fri 6:45 a.m. - 6:50 a.m. | [01:45 - 01:50 PM UTC] Q&A
Fri 6:50 a.m. - 7:35 a.m. | [01:50 - 02:35 PM UTC] Invited Talk 3: Interpretability in High Dimensions: Concept Bottlenecks and Beyond (Talk) | Finale Doshi-Velez
Fri 7:35 a.m. - 7:40 a.m. | [02:35 - 02:40 PM UTC] Q&A
Fri 7:40 a.m. - 9:58 a.m. | [02:40 - 04:58 PM UTC] Poster Session 1 (Poster Session) | Room 1: https://eventhosts.gather.town/n75uCc9V3B7JErT7/xai-poster-room-1 · Room 2: https://eventhosts.gather.town/6aHB0V4pPAulNwQz/xai-poster-room-2
Fri 9:58 a.m. - 10:00 a.m. | A short introduction of the speakers (Demonstration)
Fri 10:00 a.m. - 10:45 a.m. | [05:00 - 05:45 PM UTC] Invited Talk 4: Analysis Not Explainability (Talk) | Mukund Sundararajan
Fri 10:45 a.m. - 10:50 a.m. | [05:45 - 05:50 PM UTC] Q&A
Fri 10:50 a.m. - 11:35 a.m. | [05:50 - 06:35 PM UTC] Invited Talk 5: Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges (Talk) | Cynthia Rudin
Fri 11:35 a.m. - 11:40 a.m. | [06:35 - 06:40 PM UTC] Q&A
Fri 11:40 a.m. - 12:25 p.m. | [06:40 - 07:25 PM UTC] Invited Talk 6: Deciphering Neural Networks through the Lenses of Feature Interactions (Talk) | Yan Liu
Fri 12:25 p.m. - 12:30 p.m. | [07:25 - 07:30 PM UTC] Q&A
Fri 12:30 p.m. - 2:30 p.m. | [07:30 - 09:30 PM UTC] Poster Session 2 (Poster Session) | Room 1: https://eventhosts.gather.town/n75uCc9V3B7JErT7/xai-poster-room-1 · Room 2: https://eventhosts.gather.town/6aHB0V4pPAulNwQz/xai-poster-room-2
- A Turing Test for Transparency (Poster) | Felix Biessmann
- Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model (Poster) | Ruoxi Qin
- Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated (Poster) | Felix Biessmann
- Minimum sharpness: Scale-invariant parameter-robustness of neural networks (Poster) | Hikaru Ibayashi
- Understanding Instance-based Interpretability of Variational Auto-Encoders (Poster) | Zhifeng Kong · Kamalika Chaudhuri
- Informative Class Activation Maps: Estimating Mutual Information Between Regions and Labels (Poster) | Zhenyue Qin
- This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks (Poster) | Adrian Hoffmann · Claudio Fanconi · Rahul Rade · Jonas Kohler
- How Not to Measure Disentanglement (Poster) | Julia Kiseleva · Maarten de Rijke
- Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates (Poster) | Dan Ley · Umang Bhatt · Adrian Weller
- Towards the Unification and Robustness of Perturbation and Gradient Based Explanations (Poster) | Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout (Poster) | Pengfei Xie
- Interpretable Face Manipulation Detection via Feature Whitening (Poster) | Yingying Hua · Pengju Wang · Shiming Ge
- Synthetic Benchmarks for Scientific Research in Explainable Machine Learning (Poster) | Yang Liu · Colin White · Willie Neiswanger
- A Probabilistic Representation of DNNs: Bridging Mutual Information and Generalization (Poster) | Xinjie Lan
- A MaxSAT Approach to Inferring Explainable Temporal Properties (Poster) | Rajarshi Roy · Zhe Xu · Ufuk Topcu · Jean-Raphaël Gaglione
- Active Automaton Inference for Reinforcement Learning using Queries and Counterexamples (Poster) | Aditya Ojha · Zhe Xu · Ufuk Topcu
- Learned Interpretable Residual Extragradient ISTA for Sparse Coding (Poster) | Connie Kong · Fanhua Shang
- Neural Network Classifier as Mutual Information Evaluator (Poster) | Zhenyue Qin
- Evaluation of Saliency-based Explainability Methods (Poster) | Sam Zabdiel Samuel · Vidhya Kamakshi · Narayanan Chatapuram Krishnan
- Order in the Court: Explainable AI Methods Prone to Disagreement (Poster) | Michael Neely · Stefan F. Schouten · Ana Lucic
- On the overlooked issue of defining explanation objectives for local-surrogate explainers (Poster) | Rafael Poyiadzi · Xavier Renard · Thibault Laugel · Raul Santos-Rodriguez · Marcin Detyniecki
- How Well do Feature Visualizations Support Causal Understanding of CNN Activations? (Poster) | Roland S. Zimmermann · Judith Borowski · Robert Geirhos · Matthias Bethge · Thomas SA Wallis · Wieland Brendel
- On the Connections between Counterfactual Explanations and Adversarial Examples (Poster) | Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju
- Promises and Pitfalls of Black-Box Concept Learning Models (Poster) | Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
- On the (Un-)Avoidability of Adversarial Examples (Poster) | Ruth Urner
- Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations (Poster) | Chirag Agarwal · Marinka Zitnik · Hima Lakkaraju
- Reliable graph neural network explanations through adversarial training (Poster) | Donald Loveland · Bhavya Kailkhura · T. Yong-Jin Han
- Towards Fully Interpretable Deep Neural Networks: Are We There Yet? (Poster) | Sandareka Wickramanayake
- Towards Automated Evaluation of Explanations in Graph Neural Networks (Poster) | Balaji Ganesan · Devbrat Sharma
- A Source-Criticism Debiasing Method for GloVe Embeddings (Poster)
- Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction (Poster) | Jiahua Rao · Shuangjia Zheng
- What will it take to generate fairness-preserving explanations? (Poster) | Jessica Dai · Sohini Upadhyay · Hima Lakkaraju
- Gradient-Based Interpretability Methods and Binarized Neural Networks (Poster) | Amy Widdicombe
- Meaningfully Explaining a Model's Mistakes (Poster) | Abubakar Abid · James Zou
- Feature Attributions and Counterfactual Explanations Can Be Manipulated (Poster) | Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning (Poster) | Aaron Chan · Xiang Ren
- Re-imagining GNN Explanations with ideas from Tabular Data (Poster) | Anjali Singh · Shamanth Nayak K · Balaji Ganesan
- Learning Sparse Representations with Alternating Back-Propagation (Poster) | Tian Han
- Deep Interpretable Criminal Charge Prediction Based on Temporal Trajectory (Poster) | Jia Xu · Abdul Khan