Evaluation of Saliency-based Explainability Methods
Sam Zabdiel Samuel · Vidhya Kamakshi · Narayanan Chatapuram Krishnan
Author Information
Sam Zabdiel Samuel (Indian Institute of Technology Ropar)
Vidhya Kamakshi (Indian Institute of Technology Ropar)
Narayanan Chatapuram Krishnan (Indian Institute of Technology Ropar)
More from the Same Authors
2021: Poster Session Test
Jie Ren
2021: A Turing Test for Transparency
Felix Biessmann
2021: Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model
Ruoxi Qin
2021: Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated
Felix Biessmann
2021: Minimum sharpness: Scale-invariant parameter-robustness of neural networks
Hikaru Ibayashi
2021: Understanding Instance-based Interpretability of Variational Auto-Encoders
Zhifeng Kong · Kamalika Chaudhuri
2021: Informative Class Activation Maps: Estimating Mutual Information Between Regions and Labels
Zhenyue Qin
2021: This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks
Adrian Hoffmann · Claudio Fanconi · Rahul Rade · Jonas Kohler
2021: How Not to Measure Disentanglement
Julia Kiseleva · Maarten de Rijke
2021: Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates
Dan Ley · Umang Bhatt · Adrian Weller
2021: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations
Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
2021: Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout
Pengfei Xie
2021: Interpretable Face Manipulation Detection via Feature Whitening
Yingying Hua · Pengju Wang · Shiming Ge
2021: Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu · Colin White · Willie Neiswanger
2021: A Probabilistic Representation of DNNs: Bridging Mutual Information and Generalization
Xinjie Lan
2021: A MaxSAT Approach to Inferring Explainable Temporal Properties
Rajarshi Roy · Zhe Xu · Ufuk Topcu · Jean-Raphaël Gaglione
2021: Active Automaton Inference for Reinforcement Learning using Queries and Counterexamples
Aditya Ojha · Zhe Xu · Ufuk Topcu
2021: Learned Interpretable Residual Extragradient ISTA for Sparse Coding
Connie Kong · Fanhua Shang
2021: Neural Network Classifier as Mutual Information Evaluator
Zhenyue Qin
2021: Order in the Court: Explainable AI Methods Prone to Disagreement
Michael Neely · Stefan F. Schouten · Ana Lucic
2021: On the overlooked issue of defining explanation objectives for local-surrogate explainers
Rafael Poyiadzi · Xavier Renard · Thibault Laugel · Raul Santos-Rodriguez · Marcin Detyniecki
2021: How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
Roland S. Zimmermann · Judith Borowski · Robert Geirhos · Matthias Bethge · Thomas SA Wallis · Wieland Brendel
2021: On the Connections between Counterfactual Explanations and Adversarial Examples
Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju
2021: Promises and Pitfalls of Black-Box Concept Learning Models
Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
2021: On the (Un-)Avoidability of Adversarial Examples
Ruth Urner
2021: Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations
Chirag Agarwal · Marinka Zitnik · Hima Lakkaraju
2021: Reliable graph neural network explanations through adversarial training
Donald Loveland · Bhavya Kailkhura · T. Yong-Jin Han
2021: Towards Fully Interpretable Deep Neural Networks: Are We There Yet?
Sandareka Wickramanayake
2021: Towards Automated Evaluation of Explanations in Graph Neural Networks
Balaji Ganesan · Devbrat Sharma
2021: A Source-Criticism Debiasing Method for GloVe Embeddings
2021: Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property Prediction
Jiahua Rao · Shuangjia Zheng
2021: What will it take to generate fairness-preserving explanations?
Jessica Dai · Sohini Upadhyay · Hima Lakkaraju
2021: Gradient-Based Interpretability Methods and Binarized Neural Networks
Amy Widdicombe
2021: Meaningfully Explaining a Model's Mistakes
Abubakar Abid · James Zou
2021: Feature Attributions and Counterfactual Explanations Can Be Manipulated
Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
2021: SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning
Aaron Chan · Xiang Ren
2021: Re-imagining GNN Explanations with ideas from Tabular Data
Anjali Singh · Shamanth Nayak K · Balaji Ganesan
2021: Learning Sparse Representations with Alternating Back-Propagation
Tian Han
2021: Deep Interpretable Criminal Charge Prediction Based on Temporal Trajectory
Jia Xu · Abdul Khan
2021 Poster: On Characterizing GAN Convergence Through Proximal Duality Gap
Sahil Sidheekh · Aroof Aimen · Narayanan Chatapuram Krishnan
2021 Spotlight: On Characterizing GAN Convergence Through Proximal Duality Gap
Sahil Sidheekh · Aroof Aimen · Narayanan Chatapuram Krishnan