Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates
Dan Ley · Umang Bhatt · Adrian Weller
Author Information
Dan Ley (University of Cambridge)
Umang Bhatt (University of Cambridge)
Adrian Weller (University of Cambridge, Alan Turing Institute)

Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for data science and AI, and is a Turing AI Fellow leading work on trustworthy Machine Learning (ML). He is a Principal Research Fellow in ML at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence, where he is Programme Director for Trust and Society. His interests span AI, its commercial applications, and ensuring beneficial outcomes for society. Previously, Adrian held senior roles in finance. He received a PhD in computer science from Columbia University and an undergraduate degree in mathematics from Trinity College, Cambridge.
More from the Same Authors

- 2021 : Poster Session Test »
  Jie Ren ·
- 2021 : A Turing Test for Transparency »
  · Felix Biessmann
- 2021 : Dynamic Defense Approach for Adversarial Robustness in Deep Neural Networks via Stochastic Ensemble Smoothed Model »
  · Ruoxi Qin
- 2021 : Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated »
  · Felix Biessmann
- 2021 : Minimum sharpness: Scale-invariant parameter-robustness of neural networks »
  · Hikaru Ibayashi
- 2021 : Understanding Instance-based Interpretability of Variational Auto-Encoders »
  · Zhifeng Kong · Kamalika Chaudhuri
- 2021 : Informative Class Activation Maps: Estimating Mutual Information Between Regions and Labels »
  · Zhenyue Qin
- 2021 : This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks »
  · Adrian Hoffmann · Claudio Fanconi · Rahul Rade · Jonas Kohler
- 2021 : How Not to Measure Disentanglement »
  · Julia Kiseleva · Maarten de Rijke
- 2021 : Towards the Unification and Robustness of Perturbation and Gradient Based Explanations »
  · Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- 2021 : Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout »
  · Pengfei Xie
- 2021 : Interpretable Face Manipulation Detection via Feature Whitening »
  · Yingying Hua · Pengju Wang · Shiming Ge
- 2021 : Synthetic Benchmarks for Scientific Research in Explainable Machine Learning »
  · Yang Liu · Colin White · Willie Neiswanger
- 2021 : A Probabilistic Representation of DNNs: Bridging Mutual Information and Generalization »
  · Xinjie Lan
- 2021 : A MaxSAT Approach to Inferring Explainable Temporal Properties »
  · Rajarshi Roy · Zhe Xu · Ufuk Topcu · Jean-Raphaël Gaglione
- 2021 : Active Automaton Inference for Reinforcement Learning using Queries and Counterexamples »
  · Aditya Ojha · Zhe Xu · Ufuk Topcu
- 2021 : Learned Interpretable Residual Extragradient ISTA for Sparse Coding »
  · Connie Kong · Fanhua Shang
- 2021 : Neural Network Classifier as Mutual Information Evaluator »
  · Zhenyue Qin
- 2021 : Evaluation of Saliency-based Explainability Methods »
  · Sam Zabdiel Samuel · Vidhya Kamakshi · Narayanan Chatapuram Krishnan
- 2021 : Order in the Court: Explainable AI Methods Prone to Disagreement »
  · Michael Neely · Stefan F. Schouten · Ana Lucic
- 2021 : On the overlooked issue of defining explanation objectives for local-surrogate explainers »
  · Rafael Poyiadzi · Xavier Renard · Thibault Laugel · Raul Santos-Rodriguez · Marcin Detyniecki
- 2021 : How Well do Feature Visualizations Support Causal Understanding of CNN Activations? »
  · Roland S. Zimmermann · Judith Borowski · Robert Geirhos · Matthias Bethge · Thomas SA Wallis · Wieland Brendel
- 2021 : On the Connections between Counterfactual Explanations and Adversarial Examples »
  · Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju
- 2021 : Promises and Pitfalls of Black-Box Concept Learning Models »
  · Anita Mahinpei · Justin Clark · Isaac Lage · Finale Doshi-Velez · Weiwei Pan
- 2021 : On the (Un-)Avoidability of Adversarial Examples »
  · Ruth Urner
- 2021 : Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations »
  · Chirag Agarwal · Marinka Zitnik · Hima Lakkaraju
- 2021 : Reliable graph neural network explanations through adversarial training »
  · Donald Loveland · Bhavya Kailkhura · T. Yong-Jin Han
- 2021 : Towards Fully Interpretable Deep Neural Networks: Are We There Yet? »
  · Sandareka Wickramanayake
- 2021 : Towards Automated Evaluation of Explanations in Graph Neural Networks »
  · Balaji Ganesan · Devbrat Sharma
- 2021 : A Source-Criticism Debiasing Method for GloVe Embeddings »
- 2021 : Quantitative Evaluation of Explainable Graph Neural Networks for Molecular Property prediction »
  · Jiahua Rao · Shuangjia Zheng
- 2021 : What will it take to generate fairness-preserving explanations? »
  · Jessica Dai · Sohini Upadhyay · Hima Lakkaraju
- 2021 : Gradient-Based Interpretability Methods and Binarized Neural Networks »
  · Amy Widdicombe
- 2021 : Meaningfully Explaining a Model's Mistakes »
  · Abubakar Abid · James Zou
- 2021 : Feature Attributions and Counterfactual Explanations Can Be Manipulated »
  · Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- 2021 : SalKG: Learning From Knowledge Graph Explanations for Commonsense Reasoning »
  · Aaron Chan · Xiang Ren
- 2021 : Re-imagining GNN Explanations with ideas from Tabular Data »
  · Anjali Singh · Shamanth Nayak K · Balaji Ganesan
- 2021 : Learning Sparse Representations with Alternating Back-Propagation »
  · Tian Han
- 2021 : Deep Interpretable Criminal Charge Prediction Based on Temporal Trajectory »
  · Jia Xu · Abdul Khan
- 2021 : Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates »
  Dan Ley · Umang Bhatt · Adrian Weller
- 2021 : On the Fairness of Causal Algorithmic Recourse »
  Julius von Kügelgen · Amir-Hossein Karimi · Umang Bhatt · Isabel Valera · Adrian Weller · Bernhard Schölkopf
- 2021 : Towards Principled Disentanglement for Domain Generalization »
  Hanlin Zhang · Yi-Fan Zhang · Weiyang Liu · Adrian Weller · Bernhard Schölkopf · Eric Xing
- 2021 : CrossWalk: Fairness-enhanced Node Representation Learning »
  Ahmad Khajehnejad · Moein Khajehnejad · Krishna Gummadi · Adrian Weller · Baharan Mirzasoleiman
- 2022 : Perspectives on Incorporating Expert Feedback into Model Updates »
  Umang Bhatt
- 2022 : Perspectives on Incorporating Expert Feedback into Model Updates »
  Valerie Chen · Umang Bhatt · Hoda Heidari · Adrian Weller · Ameet Talwalkar
- 2023 : Algorithms for Optimal Adaptation of Diffusion Models to Reward Functions »
  Krishnamurthy Dvijotham · Shayegan Omidshafiei · Kimin Lee · Katie Collins · Deepak Ramachandran · Adrian Weller · Mohammad Ghavamzadeh · Milad Nasresfahani · Ying Fan · Jeremiah Liu
- 2023 : The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling probabilistic social inferences from linguistic inputs »
  Lance Ying · Katie Collins · Megan Wei · Cedegao Zhang · Tan Zhi-Xuan · Adrian Weller · Josh Tenenbaum · Catherine Wong
- 2023 Oral: Simplex Random Features »
  Isaac Reid · Krzysztof Choromanski · Valerii Likhosherstov · Adrian Weller
- 2023 Poster: Efficient Graph Field Integrators Meet Point Clouds »
  Krzysztof Choromanski · Arijit Sehanobish · Han Lin · Yunfan Zhao · Eli Berger · Tetiana Parshakova · Qingkai Pan · David Watkins · Tianyi Zhang · Valerii Likhosherstov · Somnath Basu Roy Chowdhury · Kumar Avinava Dubey · Deepali Jain · Tamas Sarlos · Snigdha Chaturvedi · Adrian Weller
- 2023 Poster: Simplex Random Features »
  Isaac Reid · Krzysztof Choromanski · Valerii Likhosherstov · Adrian Weller
- 2023 Poster: Is Learning Summary Statistics Necessary for Likelihood-free Inference? »
  Yanzhi Chen · Michael Gutmann · Adrian Weller
- 2022 : Spotlight Presentations »
  Adrian Weller · Osbert Bastani · Jake Snell · Tal Schuster · Stephen Bates · Zhendong Wang · Margaux Zaffran · Danielle Rasooly · Varun Babbar
- 2022 Workshop: Workshop on Human-Machine Collaboration and Teaming »
  Umang Bhatt · Katie Collins · Maria De-Arteaga · Bradley Love · Adrian Weller
- 2022 Poster: From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers »
  Krzysztof Choromanski · Han Lin · Haoxian Chen · Tianyi Zhang · Arijit Sehanobish · Valerii Likhosherstov · Jack Parker-Holder · Tamas Sarlos · Adrian Weller · Thomas Weingarten
- 2022 Poster: Measuring Representational Robustness of Neural Networks Through Shared Invariances »
  Vedant Nanda · Till Speicher · Camila Kolling · John P Dickerson · Krishna Gummadi · Adrian Weller
- 2022 Oral: Measuring Representational Robustness of Neural Networks Through Shared Invariances »
  Vedant Nanda · Till Speicher · Camila Kolling · John P Dickerson · Krishna Gummadi · Adrian Weller
- 2022 Spotlight: From block-Toeplitz matrices to differential equations on graphs: towards a general theory for scalable masked Transformers »
  Krzysztof Choromanski · Han Lin · Haoxian Chen · Tianyi Zhang · Arijit Sehanobish · Valerii Likhosherstov · Jack Parker-Holder · Tamas Sarlos · Adrian Weller · Thomas Weingarten
- 2021 Poster: Debiasing a First-order Heuristic for Approximate Bi-level Optimization »
  Valerii Likhosherstov · Xingyou Song · Krzysztof Choromanski · Jared Quincy Davis · Adrian Weller
- 2021 Spotlight: Debiasing a First-order Heuristic for Approximate Bi-level Optimization »
  Valerii Likhosherstov · Xingyou Song · Krzysztof Choromanski · Jared Quincy Davis · Adrian Weller
- 2020 Workshop: 5th ICML Workshop on Human Interpretability in Machine Learning (WHI) »
  Adrian Weller · Alice Xiang · Amit Dhurandhar · Been Kim · Dennis Wei · Kush Varshney · Umang Bhatt
- 2020 Poster: Stochastic Flows and Geometric Optimization on the Orthogonal Group »
  Krzysztof Choromanski · David Cheikhi · Jared Quincy Davis · Valerii Likhosherstov · Achille Nazaret · Achraf Bahamou · Xingyou Song · Mrugank Akarte · Jack Parker-Holder · Jacob Bergquist · Yuan Gao · Aldo Pacchiano · Tamas Sarlos · Adrian Weller · Vikas Sindhwani
- 2019 Workshop: Human In the Loop Learning (HILL) »
  Xin Wang · Xin Wang · Fisher Yu · Shanghang Zhang · Joseph Gonzalez · Yangqing Jia · Sarah Bird · Kush Varshney · Been Kim · Adrian Weller
- 2019 Poster: Unifying Orthogonal Monte Carlo Methods »
  Krzysztof Choromanski · Mark Rowland · Wenyu Chen · Adrian Weller
- 2019 Oral: Unifying Orthogonal Monte Carlo Methods »
  Krzysztof Choromanski · Mark Rowland · Wenyu Chen · Adrian Weller
- 2019 Poster: TibGM: A Transferable and Information-Based Graphical Model Approach for Reinforcement Learning »
  Tameem Adel · Adrian Weller
- 2019 Oral: TibGM: A Transferable and Information-Based Graphical Model Approach for Reinforcement Learning »
  Tameem Adel · Adrian Weller
- 2018 Poster: Blind Justice: Fairness with Encrypted Sensitive Attributes »
  Niki Kilbertus · Adria Gascon · Matt Kusner · Michael Veale · Krishna Gummadi · Adrian Weller
- 2018 Oral: Blind Justice: Fairness with Encrypted Sensitive Attributes »
  Niki Kilbertus · Adria Gascon · Matt Kusner · Michael Veale · Krishna Gummadi · Adrian Weller
- 2018 Poster: Bucket Renormalization for Approximate Inference »
  Sungsoo Ahn · Michael Chertkov · Adrian Weller · Jinwoo Shin
- 2018 Oral: Bucket Renormalization for Approximate Inference »
  Sungsoo Ahn · Michael Chertkov · Adrian Weller · Jinwoo Shin
- 2018 Poster: Structured Evolution with Compact Architectures for Scalable Policy Optimization »
  Krzysztof Choromanski · Mark Rowland · Vikas Sindhwani · Richard E Turner · Adrian Weller
- 2018 Poster: Discovering Interpretable Representations for Both Deep Generative and Discriminative Models »
  Tameem Adel · Zoubin Ghahramani · Adrian Weller
- 2018 Oral: Discovering Interpretable Representations for Both Deep Generative and Discriminative Models »
  Tameem Adel · Zoubin Ghahramani · Adrian Weller
- 2018 Oral: Structured Evolution with Compact Architectures for Scalable Policy Optimization »
  Krzysztof Choromanski · Mark Rowland · Vikas Sindhwani · Richard E Turner · Adrian Weller
- 2017 Workshop: Reliable Machine Learning in the Wild »
  Dylan Hadfield-Menell · Jacob Steinhardt · Adrian Weller · Smitha Milli
- 2017 : A. Weller, "Challenges for Transparency" »
  Adrian Weller
- 2017 Workshop: Workshop on Human Interpretability in Machine Learning (WHI) »
  Kush Varshney · Adrian Weller · Been Kim · Dmitry Malioutov
- 2017 Poster: Lost Relatives of the Gumbel Trick »
  Matej Balog · Nilesh Tripuraneni · Zoubin Ghahramani · Adrian Weller
- 2017 Talk: Lost Relatives of the Gumbel Trick »
  Matej Balog · Nilesh Tripuraneni · Zoubin Ghahramani · Adrian Weller