Workshop
ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI
Quanshi Zhang · Tian Han · Lixin Fan · Zhanxing Zhu · Hang Su · Ying Nian Wu
Fri 23 Jul, 5 a.m. PDT
The proposed workshop pays special attention to the theoretic foundations, limitations, and new application trends in the scope of XAI. These issues reflect new bottlenecks in the future development of XAI, for example: (1) there is no theoretic definition of XAI, and no solid, widely used formulation exists for even a specific explanation task; (2) there is no sophisticated formulation of the essence of the "semantics" encoded in a DNN; (3) how to bridge the gap between connectionism and symbolism in AI research has not been thoroughly explored; (4) how to evaluate the correctness and trustworthiness of an explanation result is still an open problem; (5) how to bridge intuitive explanations (e.g., attribution/importance-based explanations) and a DNN's representation capacity (e.g., its generalization power) is still a significant challenge; and (6) using explanations to guide architecture design or to substantially boost a DNN's performance remains a bottleneck. Therefore, this workshop aims to bring together researchers, engineers, and industrial practitioners who are concerned with the interpretability, safety, and reliability of artificial intelligence. Through a broad discussion of the above bottleneck issues, we hope to explore new critical and constructive views on the future development of XAI. Research outcomes are also expected to profoundly influence critical industrial applications such as medical diagnosis, finance, and autonomous driving.
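To make the attribution/importance-based explanations of bottleneck (5) concrete, the following is a minimal vanilla-gradient saliency sketch. It assumes PyTorch and torchvision, and uses a randomly initialized ResNet-18 purely as a stand-in model; it is an illustration of the general technique, not a method from any workshop paper.

```python
import torch
import torchvision.models as models

# Stand-in model: random weights, only to illustrate the mechanics.
# In practice one would load a trained network.
model = models.resnet18().eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in "image"

logits = model(x)
score = logits[0, logits.argmax()]  # score of the predicted class
score.backward()                    # d(score) / d(input)

# Per-pixel importance: max absolute gradient across color channels.
saliency = x.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```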
Accepted papers: https://arxiv.org/html/2107.08821
Schedule
Fri 4:59 a.m. - 5:00 a.m. | Accepted papers (Proceeding)
Fri 5:00 a.m. - 5:02 a.m. | [12:00 - 12:02 PM UTC] Welcome (Demonstration) | Quanshi Zhang
Fri 5:02 a.m. - 5:47 a.m. | [12:02 - 12:47 PM UTC] Invited Talk 1: Explainable AI: How Machines Gain Justified Trust from Humans (Talk) | SlidesLive Video | Song-Chun Zhu
Fri 5:47 a.m. - 5:52 a.m. | [12:47 - 12:52 PM UTC] Q&A
Fri 5:52 a.m. - 6:45 a.m. | [12:52 - 01:45 PM UTC] Invited Talk 2: Toward Explainable AI (Talk) | SlidesLive Video | Klaus-Robert Müller · Wojciech Samek · Grégoire Montavon
Fri 6:45 a.m. - 6:50 a.m. | [01:45 - 01:50 PM UTC] Q&A
Fri 6:50 a.m. - 7:35 a.m. | [01:50 - 02:35 PM UTC] Invited Talk 3: Interpretability in High Dimensions: Concept Bottlenecks and Beyond (Talk) | SlidesLive Video | Finale Doshi-Velez
Fri 7:35 a.m. - 7:40 a.m. | [02:35 - 02:40 PM UTC] Q&A
Fri 7:40 a.m. - 9:58 a.m. | [02:40 - 04:58 PM UTC] Poster Session 1 (Poster Session)
Fri 9:58 a.m. - 10:00 a.m. | A short introduction of the speakers (Demonstration)
Fri 10:00 a.m. - 10:45 a.m. | [05:00 - 05:45 PM UTC] Invited Talk 4: Analysis Not Explainability (Talk) | SlidesLive Video | Mukund Sundararajan
Fri 10:45 a.m. - 10:50 a.m. | [05:45 - 05:50 PM UTC] Q&A
Fri 10:50 a.m. - 11:35 a.m. | [05:50 - 06:35 PM UTC] Invited Talk 5: Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges (Talk) | SlidesLive Video | Cynthia Rudin
Fri 11:35 a.m. - 11:40 a.m. | [06:35 - 06:40 PM UTC] Q&A
Fri 11:40 a.m. - 12:25 p.m. | [06:40 - 07:25 PM UTC] Invited Talk 6: Deciphering Neural Networks through the Lenses of Feature Interactions (Talk) | SlidesLive Video | Yan Liu
Fri 12:25 p.m. - 12:30 p.m. | [07:25 - 07:30 PM UTC] Q&A
Fri 12:30 p.m. - 2:30 p.m. | [07:30 - 09:30 PM UTC] Poster Session 2 (Poster Session)
- A Turing Test for Transparency (Poster) | SlidesLive Video | Felix Biessmann
- Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model (Poster) | SlidesLive Video | Ruoxi Qin
- Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated (Poster) | SlidesLive Video | Felix Biessmann
- Minimum sharpness: Scale-invariant parameter-robustness of neural networks (Poster) | SlidesLive Video | Hikaru Ibayashi
- Understanding Instance-based Interpretability of Variational Auto-Encoders (Poster) | SlidesLive Video | Zhifeng Kong · Kamalika Chaudhuri
- Informative Class Activation Maps: Estimating Mutual Information Between Regions and Labels (Poster) | SlidesLive Video | Zhenyue Qin
- This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks (Poster) | SlidesLive Video | Adrian Hoffmann · Claudio Fanconi · Rahul Rade · Jonas Kohler
- How Not to Measure Disentanglement (Poster) | Julia Kiseleva · Maarten de Rijke
- Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates (Poster) | Dan Ley · Umang Bhatt · Adrian Weller
- Towards the Unification and Robustness of Perturbation and Gradient Based Explanations (Poster) | SlidesLive Video | Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout (Poster) | SlidesLive Video | Pengfei Xie
- Interpretable Face Manipulation Detection via Feature Whitening (Poster) | SlidesLive Video | Yingying Hua · Pengju Wang · Shiming Ge
- Synthetic Benchmarks for Scientific Research in Explainable Machine Learning (Poster) | SlidesLive Video | Yang Liu · Colin White · Willie Neiswanger
- A Probabilistic Representation of DNNs: Bridging Mutual Information and Generalization (Poster) | SlidesLive Video | Xinjie Lan
- A MaxSAT Approach to Inferring Explainable Temporal Properties (Poster) | SlidesLive Video | Rajarshi Roy · Zhe Xu · Ufuk Topcu · Jean-Raphaël Gaglione
- Active Automaton Inference for Reinforcement Learning using Queries and Counterexamples (Poster) | SlidesLive Video | Aditya Ojha · Zhe Xu · Ufuk Topcu
- Learned Interpretable Residual Extragradient ISTA for Sparse Coding (Poster) | SlidesLive Video | Connie Kong · Fanhua Shang
- Neural Network Classifier as Mutual Information Evaluator (Poster) | SlidesLive Video | Zhenyue Qin
- Evaluation of Saliency-based Explainability Methods (Poster) | SlidesLive Video | Sam Zabdiel Samuel · Vidhya Kamakshi · Narayanan Chatapuram Krishnan
- Order in the Court: Explainable AI Methods Prone to Disagreement (Poster) | SlidesLive Video | Michael Neely · Stefan F. Schouten · Ana Lucic
- On the overlooked issue of defining explanation objectives for local-surrogate explainers (Poster) | SlidesLive Video | Rafael Poyiadzi · Xavier Renard · Thibault Laugel · Raul Santos-Rodriguez · Marcin Detyniecki
- How Well do Feature Visualizations Support Causal Understanding of CNN Activations? (Poster) | SlidesLive Video | Roland S. Zimmermann · Judith Borowski · Robert Geirhos · Matthias Bethge · Thomas SA Wallis · Wieland Brendel
- On the Connections between Counterfactual Explanations and Adversarial Examples (Poster) | SlidesLive Video | Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju
- Promises and Pitfalls of Black-Box Concept Learning Models (Poster) | SlidesLive Video | Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
- On the (Un-)Avoidability of Adversarial Examples (Poster) | SlidesLive Video | Ruth Urner
- Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations (Poster) | SlidesLive Video | Chirag Agarwal · Marinka Zitnik · Hima Lakkaraju
- Reliable graph neural network explanations through adversarial training (Poster) | SlidesLive Video | Donald Loveland · Bhavya Kailkhura · T. Yong-Jin Han
- Towards Fully Interpretable Deep Neural Networks: Are We There Yet? (Poster) | SlidesLive Video | Sandareka Wickramanayake
- Towards Automated Evaluation of Explanations in Graph Neural Networks (Poster) | SlidesLive Video | Balaji Ganesan · Devbrat Sharma
- A Source-Criticism Debiasing Method for GloVe Embeddings (Poster) | SlidesLive Video
- Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction (Poster) | SlidesLive Video | Jiahua Rao · Shuangjia Zheng
- What will it take to generate fairness-preserving explanations? (Poster) | SlidesLive Video | Jessica Dai · Sohini Upadhyay · Hima Lakkaraju
- Gradient-Based Interpretability Methods and Binarized Neural Networks (Poster) | SlidesLive Video | Amy Widdicombe
- Meaningfully Explaining a Model's Mistakes (Poster) | SlidesLive Video | Abubakar Abid · James Zou
- Feature Attributions and Counterfactual Explanations Can Be Manipulated (Poster) | SlidesLive Video | Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning (Poster) | SlidesLive Video | Aaron Chan · Xiang Ren
- Re-imagining GNN Explanations with ideas from Tabular Data (Poster) | SlidesLive Video | Anjali Singh · Shamanth Nayak K · Balaji Ganesan
- Learning Sparse Representations with Alternating Back-Propagation (Poster) | SlidesLive Video | Tian Han
- Deep Interpretable Criminal Charge Prediction Based on Temporal Trajectory (Poster) | SlidesLive Video | Jia Xu · Abdul Khan