The proposed workshop pays special attention to the theoretical foundations, limitations, and new application trends of XAI. These issues reflect emerging bottlenecks in the future development of XAI, for example: (1) there is no theoretical definition of XAI and no solid, widely used formulation for even a specific explanation task; (2) there is no rigorous formulation of the essence of the "semantics" encoded in a DNN; (3) how to bridge the gap between connectionism and symbolism in AI research has not been thoroughly explored; (4) how to evaluate the correctness and trustworthiness of an explanation result is still an open problem; (5) how to connect intuitive explanations (e.g., attribution/importance-based explanations) with a DNN's representation capacity (e.g., its generalization power) is still a significant challenge; (6) using explanations to guide architecture design or substantially boost the performance of a DNN remains a bottleneck. Therefore, this workshop aims to bring together researchers, engineers, and industrial practitioners who are concerned about the interpretability, safety, and reliability of artificial intelligence. Through a broad discussion of the above bottleneck issues, we hope to explore new critical and constructive views on the future development of XAI. The research outcomes are also expected to profoundly influence critical industrial applications such as medical diagnosis, finance, and autonomous driving.
Accepted papers: https://arxiv.org/html/2107.08821
Fri 4:59 a.m. - 5:00 a.m. | Accepted papers (Proceeding)
Fri 5:00 a.m. - 5:02 a.m. | [12:00 - 12:02 PM UTC] Welcome (Demonstration) | Quanshi Zhang
Fri 5:02 a.m. - 5:47 a.m. | [12:02 - 12:47 PM UTC] Invited Talk 1: Explainable AI: How Machines Gain Justified Trust from Humans (Talk) | Song-Chun Zhu
Fri 5:47 a.m. - 5:52 a.m. | [12:47 - 12:52 PM UTC] Q&A (Q&A)
Fri 5:52 a.m. - 6:45 a.m. | [12:52 - 01:45 PM UTC] Invited Talk 2: Toward Explainable AI (Talk) | Klaus-Robert Mueller · Wojciech Samek · Grégoire Montavon
Fri 6:45 a.m. - 6:50 a.m. | [01:45 - 01:50 PM UTC] Q&A (Q&A)
Fri 6:50 a.m. - 7:35 a.m. | [01:50 - 02:35 PM UTC] Invited Talk 3: Interpretability in High Dimensions: Concept Bottlenecks and Beyond (Talk) | Finale Doshi-Velez
Fri 7:35 a.m. - 7:40 a.m. | [02:35 - 02:40 PM UTC] Q&A (Q&A)
Fri 7:40 a.m. - 9:58 a.m. | [02:40 - 04:58 PM UTC] Poster Session 1 (Poster Session)
Fri 9:58 a.m. - 10:00 a.m. | A short introduction of the speakers (Demonstration)
Fri 10:00 a.m. - 10:45 a.m. | [05:00 - 05:45 PM UTC] Invited Talk 4: Analysis Not Explainability (Talk) | Mukund Sundararajan
Fri 10:45 a.m. - 10:50 a.m. | [05:45 - 05:50 PM UTC] Q&A (Q&A)
Fri 10:50 a.m. - 11:35 a.m. | [05:50 - 06:35 PM UTC] Invited Talk 5: Interpretable Machine Learning: Fundamental Principles And 10 Grand Challenges (Talk) | Cynthia Rudin
Fri 11:35 a.m. - 11:40 a.m. | [06:35 - 06:40 PM UTC] Q&A (Q&A)
Fri 11:40 a.m. - 12:25 p.m. | [06:40 - 07:25 PM UTC] Invited Talk 6: Deciphering Neural Networks through the Lenses of Feature Interactions (Talk) | Yan Liu
Fri 12:25 p.m. - 12:30 p.m. | [07:25 - 07:30 PM UTC] Q&A (Q&A)
Fri 12:30 p.m. - 2:30 p.m. | [07:30 - 09:30 PM UTC] Poster Session 2 (Poster Session)
- A Turing Test for Transparency (Poster) | Felix Biessmann
- Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model (Poster) | Ruoxi Qin
- Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated (Poster) | Felix Biessmann
- Minimum sharpness: Scale-invariant parameter-robustness of neural networks (Poster) | Hikaru Ibayashi
- Understanding Instance-based Interpretability of Variational Auto-Encoders (Poster) | Zhifeng Kong · Kamalika Chaudhuri
- Informative Class Activation Maps: Estimating Mutual Information Between Regions and Labels (Poster) | Zhenyue Qin
- This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks (Poster) | Adrian Hoffmann · Claudio Fanconi · Rahul Rade · Jonas Kohler
- How Not to Measure Disentanglement (Poster) | Julia Kiseleva · Maarten de Rijke
- Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates (Poster) | Dan Ley · Umang Bhatt · Adrian Weller
- Towards the Unification and Robustness of Perturbation and Gradient Based Explanations (Poster) | Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout (Poster) | Pengfei Xie
- Interpretable Face Manipulation Detection via Feature Whitening (Poster) | Yingying Hua · Pengju Wang · Shiming Ge
- Synthetic Benchmarks for Scientific Research in Explainable Machine Learning (Poster) | Yang Liu · Colin White · Willie Neiswanger
- A Probabilistic Representation of DNNs: Bridging Mutual Information and Generalization (Poster) | Xinjie Lan
- A MaxSAT Approach to Inferring Explainable Temporal Properties (Poster) | Rajarshi Roy · Zhe Xu · Ufuk Topcu · Jean-Raphaël Gaglione
- Active Automaton Inference for Reinforcement Learning using Queries and Counterexamples (Poster) | Aditya Ojha · Zhe Xu · Ufuk Topcu
- Learned Interpretable Residual Extragradient ISTA for Sparse Coding (Poster) | Connie Kong · Fanhua Shang
- Neural Network Classifier as Mutual Information Evaluator (Poster) | Zhenyue Qin
- Evaluation of Saliency-based Explainability Methods (Poster) | Sam Zabdiel Samuel · Vidhya Kamakshi · Narayanan Chatapuram Krishnan
- Order in the Court: Explainable AI Methods Prone to Disagreement (Poster) | Michael Neely · Stefan F. Schouten · Ana Lucic
- On the overlooked issue of defining explanation objectives for local-surrogate explainers (Poster) | Rafael Poyiadzi · Xavier Renard · Thibault Laugel · Raul Santos-Rodriguez · Marcin Detyniecki
- How Well do Feature Visualizations Support Causal Understanding of CNN Activations? (Poster) | Roland S. Zimmermann · Judith Borowski · Robert Geirhos · Matthias Bethge · Thomas SA Wallis · Wieland Brendel
- On the Connections between Counterfactual Explanations and Adversarial Examples (Poster) | Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju
- Promises and Pitfalls of Black-Box Concept Learning Models (Poster) | Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
- On the (Un-)Avoidability of Adversarial Examples (Poster) | Ruth Urner
- Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations (Poster) | Chirag Agarwal · Marinka Zitnik · Hima Lakkaraju
- Reliable graph neural network explanations through adversarial training (Poster) | Donald Loveland · Bhavya Kailkhura · T. Yong-Jin Han
- Towards Fully Interpretable Deep Neural Networks: Are We There Yet? (Poster) | Sandareka Wickramanayake
- Towards Automated Evaluation of Explanations in Graph Neural Networks (Poster) | Balaji Ganesan · Devbrat Sharma
- A Source-Criticism Debiasing Method for GloVe Embeddings (Poster)
- Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property prediction (Poster) | Jiahua Rao · Shuangjia Zheng
- What will it take to generate fairness-preserving explanations? (Poster) | Jessica Dai · Sohini Upadhyay · Hima Lakkaraju
- Gradient-Based Interpretability Methods and Binarized Neural Networks (Poster) | Amy Widdicombe
- Meaningfully Explaining a Model's Mistakes (Poster) | Abubakar Abid · James Zou
- Feature Attributions and Counterfactual Explanations Can Be Manipulated (Poster) | Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning (Poster) | Aaron Chan · Xiang Ren
- Re-imagining GNN Explanations with ideas from Tabular Data (Poster) | Anjali Singh · Shamanth Nayak K · Balaji Ganesan
- Learning Sparse Representations with Alternating Back-Propagation (Poster) | Tian Han
- Deep Interpretable Criminal Charge Prediction Based on Temporal Trajectory (Poster) | Jia Xu · Abdul Khan
Author Information
Quanshi Zhang (Shanghai Jiao Tong University)
Tian Han (Stevens Institute of Technology)
Lixin Fan (WeBank)
Zhanxing Zhu (Peking University)
Hang Su (Tsinghua University)
Ying Nian Wu (UCLA)