As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. In this work, we establish a key connection between fairness and stability and leverage it to propose a novel framework, NIFTY (uNIfying Fairness and stabiliTY), which can be used with any GNN to learn fair and stable representations. We introduce an objective function that simultaneously accounts for fairness and stability, and propose layer-wise weight normalization of GNNs using the Lipschitz constant. Further, we theoretically show that our layer-wise weight normalization promotes fairness and stability in the resulting representations. We introduce three new graph datasets comprising high-stakes decisions in the criminal justice and financial lending domains. Extensive experiments on these datasets demonstrate the efficacy of our framework.
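The abstract summarizes the mechanism but the page carries no code, so below is a minimal, hypothetical sketch of what Lipschitz-based layer-wise weight normalization for a GNN layer might look like in PyTorch. The names `lipschitz_normalize` and `NormalizedGraphLayer`, the power-iteration spectral-norm estimate, and the mean-aggregation layer are all illustrative assumptions, not the authors' NIFTY implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def lipschitz_normalize(weight: torch.Tensor, n_iters: int = 25) -> torch.Tensor:
    """Rescale `weight` so that its spectral norm (an upper bound on the
    layer's Lipschitz constant for a 1-Lipschitz activation) is at most 1.
    The largest singular value is estimated with power iteration."""
    u = torch.randn(weight.shape[0], device=weight.device)
    v = torch.randn(weight.shape[1], device=weight.device)
    for _ in range(n_iters):
        v = F.normalize(weight.t() @ u, dim=0)
        u = F.normalize(weight @ v, dim=0)
    sigma = torch.dot(u, weight @ v)        # estimated top singular value
    return weight / sigma.clamp(min=1.0)    # shrink only; never inflate

class NormalizedGraphLayer(nn.Module):
    """Toy mean-aggregation message-passing layer whose weight matrix is
    Lipschitz-normalized on every forward pass (illustrative only)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_dim, in_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        w = lipschitz_normalize(self.weight)
        # Mean-aggregate neighbor features, then apply the normalized map.
        agg = (adj @ x) / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(agg @ w.t())
```

Capping the spectral norm bounds how much each layer can amplify a perturbation of its input, which is the mechanism the abstract connects to stability, and via the fairness-stability link, to fairness of the learned representations.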
Author Information
Chirag Agarwal (Harvard University)
Hima Lakkaraju (Harvard University)
Marinka Zitnik (Harvard University)
More from the Same Authors
- 2021 : Towards the Unification and Robustness of Perturbation and Gradient Based Explanations »
  Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- 2021 : On the Connections between Counterfactual Explanations and Adversarial Examples »
  Martin Pawelczyk · Shalmali Joshi · Chirag Agarwal · Sohini Upadhyay · Hima Lakkaraju
- 2021 : Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations »
  Chirag Agarwal · Marinka Zitnik · Hima Lakkaraju
- 2021 : What will it take to generate fairness-preserving explanations? »
  Jessica Dai · Sohini Upadhyay · Hima Lakkaraju
- 2021 : Feature Attributions and Counterfactual Explanations Can Be Manipulated »
  Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- 2021 : Towards Robust and Reliable Algorithmic Recourse »
  Sohini Upadhyay · Shalmali Joshi · Hima Lakkaraju
- 2021 : Enhancing interpretability and reducing uncertainties in deep learning of electrocardiograms using a sub-waveform representation »
  Hossein Honarvar · Chirag Agarwal · Sulaiman Somani · Girish Nadkarni · Marinka Zitnik · Fei Wang · Benjamin Glicksberg
- 2021 : Reliable Post hoc Explanations: Modeling Uncertainty in Explainability »
  Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- 2021 : Interactive Visual Explanations for Deep Drug Repurposing »
  Qianwen Wang · Payal Chandak · Marinka Zitnik
- 2023 : Towards Fair Knowledge Distillation using Student Feedback »
  Abhinav Java · Surgan Jandial · Chirag Agarwal
- 2023 : Counterfactual Explanation Policies in RL »
  Shripad Deshmukh · Srivatsan R · Supriti Vijay · Jayakumar Subramanian · Chirag Agarwal
- 2023 : Fair Machine Unlearning: Data Removal while Mitigating Disparities »
  Alex Oesterling · Jiaqi Ma · Flavio Calmon · Hima Lakkaraju
- 2023 : Evaluating the Causal Reasoning Abilities of Large Language Models »
  Isha Puri · Hima Lakkaraju
- 2023 : Himabindu Lakkaraju - Regulating Explainable AI: Technical Challenges and Opportunities »
  Hima Lakkaraju
- 2023 : Efficient Estimation of Local Robustness of Machine Learning Models »
  Tessa Han · Suraj Srinivas · Hima Lakkaraju
- 2023 Poster: Domain Adaptation for Time Series Under Feature and Label Shifts »
  Huan He · Owen Queen · Teddy Koker · Consuelo Cuevas · Theodoros Tsiligkaridis · Marinka Zitnik
- 2023 Tutorial: Responsible AI for Generative AI in Practice: Lessons Learned and Open Challenges »
  Krishnaram Kenthapadi · Hima Lakkaraju · Nazneen Rajani
- 2022 Workshop: AI for Science »
  Yuanqi Du · Tianfan Fu · Wenhao Gao · Kexin Huang · Shengchao Liu · Ziming Liu · Hanchen Wang · Connor Coley · Le Song · Linfeng Zhang · Marinka Zitnik
- 2022 Workshop: New Frontiers in Adversarial Machine Learning »
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo
- 2022 Social: Trustworthy Machine Learning Social »
  Haohan Wang · Sarah Tan · Chirag Agarwal · Chhavi Yadav · Jaydeep Borkar
- 2021 Workshop: ICML Workshop on Algorithmic Recourse »
  Stratis Tsirtsis · Amir-Hossein Karimi · Ana Lucic · Manuel Gomez-Rodriguez · Isabel Valera · Hima Lakkaraju
- 2021 : Towards Robust and Reliable Model Explanations for Healthcare »
  Hima Lakkaraju
- 2021 Poster: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations »
  Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- 2021 Spotlight: Towards the Unification and Robustness of Perturbation and Gradient Based Explanations »
  Sushant Agarwal · Shahin Jabbari · Chirag Agarwal · Sohini Upadhyay · Steven Wu · Hima Lakkaraju
- 2020 Workshop: Graph Representation Learning and Beyond (GRL+) »
  Petar Veličković · Michael M. Bronstein · Andreea Deac · Will Hamilton · Jessica Hamrick · Milad Hashemi · Stefanie Jegelka · Jure Leskovec · Renjie Liao · Federico Monti · Yizhou Sun · Kevin Swersky · Rex (Zhitao) Ying · Marinka Zitnik
- 2020 Poster: Robust and Stable Black Box Explanations »
  Hima Lakkaraju · Nino Arsov · Osbert Bastani