

Poster in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Stabilizing GNN for Fairness via Lipschitz Bounds

Yaning Jia · Chunhui Zhang

Keywords: [ Model Stability ] [ Graph Neural Networks ] [ Lipschitz Bound ]


Abstract:

The Lipschitz bound, a technique from robust statistics, limits how much a model's output can change with respect to its input, including changes induced by irrelevant, biased input factors. It provides an efficient and provable way to examine the output stability of machine learning models without incurring additional computational cost. However, no prior work has investigated Lipschitz bounds for Graph Neural Networks (GNNs), especially in the context of non-Euclidean data with inherent biases. This makes it difficult to constrain the GNN output perturbations induced by input biases and to ensure fairness during training. This paper addresses that gap by formulating a Lipschitz bound for GNNs operating on attributed graphs and analyzing how the Lipschitz constant can constrain bias-induced output perturbations for fairness-aware training. We experimentally validate that the Lipschitz bound is effective at limiting bias in model outputs. Additionally, from a training-dynamics perspective, we demonstrate how the theoretical Lipschitz bound can guide GNN training to balance accuracy and fairness.
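The abstract does not state the bound's closed form, so the following is an illustrative assumption rather than the paper's actual derivation: a standard way to bound a GCN-style layer H' = σ(ÂHW), with a 1-Lipschitz activation σ (e.g. ReLU), is the product of spectral norms ‖Â‖₂·‖W‖₂, composed multiplicatively across layers. A minimal PyTorch sketch of that estimate, with hypothetical function names:

```python
import torch

def layer_lipschitz_bound(adj_norm: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """Upper bound on the Lipschitz constant of one GCN-style layer
    H' = sigma(A_hat @ H @ W), assuming sigma is 1-Lipschitz:
    Lip <= ||A_hat||_2 * ||W||_2 (product of spectral norms)."""
    return torch.linalg.matrix_norm(adj_norm, ord=2) * torch.linalg.matrix_norm(weight, ord=2)

def model_lipschitz_bound(adj_norm: torch.Tensor, weights: list[torch.Tensor]) -> torch.Tensor:
    """Compose the per-layer bounds multiplicatively across the whole network."""
    bound = torch.tensor(1.0)
    for w in weights:
        bound = bound * layer_lipschitz_bound(adj_norm, w)
    return bound

# Toy usage: a 2-layer GCN on a 4-node cycle graph (shapes are illustrative).
A = torch.tensor([[0., 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
A_hat = A + torch.eye(4)                                    # add self-loops
d_inv_sqrt = A_hat.sum(dim=1).rsqrt()                       # D^{-1/2}
A_hat = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]   # symmetric normalization
W1, W2 = torch.randn(8, 16), torch.randn(16, 2)
print(model_lipschitz_bound(A_hat, [W1, W2]).item())
```

Since the symmetrically normalized adjacency with self-loops has spectral norm at most 1, this bound reduces to the product of the weight matrices' spectral norms, which is why spectral-norm control of the weights is a common lever for constraining such a bound during training.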
