Poster

FDGen: A Fairness-Aware Graph Generation Model

Zichong Wang · Wenbin Zhang

East Exhibition Hall A-B #E-1003
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Graph generation models have shown significant potential across various domains. However, despite their success, these models often inherit societal biases, limiting their adoption in real-world applications. Existing research on fairness in graph generation primarily addresses structural bias, overlooking the critical issue of feature bias. To address this gap, we propose FDGen, a novel approach that defines and mitigates both feature and structural biases in graph generation models. Furthermore, we provide a theoretical analysis of how bias sources in graph data contribute to disparities in graph generation tasks. Experimental results on four real-world datasets demonstrate that FDGen outperforms state-of-the-art methods, achieving notable improvements in fairness while maintaining competitive generation performance.

Lay Summary:

We often use computer models to generate graphs, which are structures that show how different items or entities connect to each other (for example, friendships in a social network). Unfortunately, these models can pick up hidden biases from the data, leading to unfairness. Our work introduces a new method called FDGen that addresses two types of bias in these models: feature bias (where certain attributes of the nodes are treated unfairly) and structural bias (where connections are unfairly formed). We also explain how different sources of bias in the data can lead to these problems. Our experiments on real-world datasets show that FDGen makes the generated graphs fairer while still producing high-quality results.
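To make the notion of structural bias concrete, here is a minimal sketch (not FDGen itself; the function name and the choice of metric are illustrative assumptions) that measures one simple structural-bias proxy: the gap between how often edges form within a sensitive group versus across groups.

```python
from itertools import combinations

def edge_rate_gap(edges, groups):
    """Structural-bias proxy (illustrative, not the FDGen metric):
    absolute difference between the within-group edge rate and the
    cross-group edge rate over all node pairs.

    edges:  iterable of undirected edges (u, v)
    groups: dict mapping node -> sensitive-group label
    """
    edge_set = {frozenset(e) for e in edges}
    within_pairs = cross_pairs = 0
    within_edges = cross_edges = 0
    for u, v in combinations(sorted(groups), 2):
        same_group = groups[u] == groups[v]
        if same_group:
            within_pairs += 1
        else:
            cross_pairs += 1
        if frozenset((u, v)) in edge_set:
            if same_group:
                within_edges += 1
            else:
                cross_edges += 1
    within_rate = within_edges / within_pairs if within_pairs else 0.0
    cross_rate = cross_edges / cross_pairs if cross_pairs else 0.0
    return abs(within_rate - cross_rate)

# Toy example: two groups of two nodes each; all within-group pairs are
# connected, but only one of four cross-group pairs is.
gap = edge_rate_gap(
    edges=[(0, 1), (2, 3), (0, 2)],
    groups={0: "a", 1: "a", 2: "b", 3: "b"},
)
print(gap)  # → 0.75
```

A gap near 0 would mean connections form at similar rates within and across groups; a large gap like the one above signals the kind of unfairly formed connections the lay summary describes.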
