FairGB: A Fair Granular-Ball Generation Method for Data Classification
Qifen Yang ⋅ Yuhui Deng ⋅ Jiande Huang ⋅ Peng Zhou ⋅ Xiwen Lu ⋅ Lin Cui
Abstract
With the widespread application of data-driven classifiers in high-risk domains, group fairness has become a key research focus. However, most existing methods rely on model constraints or data reweighting, which often suffer from limited interpretability and may distort the original data distribution. Granular-ball computing (GBC), a structured and highly interpretable learning framework, provides a natural foundation for incorporating group fairness into the data partitioning process. Building on this insight, we first propose a $\textbf{Fair}$ $\textbf{G}$ranular-$\textbf{B}$all $\textbf{G}$eneration framework (FairGBG), which employs a fair clustering algorithm to maintain a balanced proportion of sensitive groups within each granular-ball (GB) during construction, thereby enhancing within-ball group fairness. Theoretical analysis shows that FairGBG preserves high purity within each GB while satisfying group fairness. We then introduce a $\textbf{Fair}$ $\textbf{G}$ranular-$\textbf{B}$all-based $\textbf{F}$air data $\textbf{C}$lassification method (FairGBFC), which enhances classification fairness by leveraging the group fairness within GBs. Experimental results on multiple benchmark datasets demonstrate that FairGBFC significantly improves classification fairness over existing methods while maintaining competitive accuracy, and that it outperforms standard GB-based methods on all benchmark datasets. Compared with state-of-the-art fairness-aware baselines, FairGBFC achieves a superior accuracy-fairness trade-off, effectively mitigating bias while preserving high utility.
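To make the core idea concrete, the sketch below illustrates one simple way a data partition can be forced to keep balanced sensitive-group proportions, in the spirit of the within-ball fairness constraint described above. This is not the paper's FairGBG algorithm: the two-ball heuristic, the seed-selection step, and all function names (`balanced_two_split`) are illustrative assumptions introduced here.

```python
import numpy as np

def balanced_two_split(X, s, seed=0):
    """Split points X (n x d) with a binary sensitive attribute s into two
    'balls' so that each ball receives an equal share of every sensitive
    group. Illustrative sketch only, not the FairGBG construction:
    - pick two far-apart seed centers with a simple farthest-point heuristic;
    - within each sensitive group, send the half of the points closest to
      center 0 (relative to center 1) into ball 0, the rest into ball 1.
    """
    rng = np.random.default_rng(seed)
    # Farthest-point heuristic for two spread-out centers (assumption).
    i = rng.integers(len(X))
    j = int(np.argmax(np.linalg.norm(X - X[i], axis=1)))
    i = int(np.argmax(np.linalg.norm(X - X[j], axis=1)))
    c0, c1 = X[i], X[j]

    labels = np.empty(len(X), dtype=int)
    for g in (0, 1):  # enforce the group ratio separately per sensitive group
        idx = np.where(s == g)[0]
        # Preference score: negative means closer to c0 than to c1.
        pref = (np.linalg.norm(X[idx] - c0, axis=1)
                - np.linalg.norm(X[idx] - c1, axis=1))
        order = idx[np.argsort(pref)]
        half = len(order) // 2
        labels[order[:half]] = 0  # equal share of this group goes to ball 0
        labels[order[half:]] = 1  # and the remainder to ball 1
    return labels
```

With 10 points from each group, every resulting ball contains exactly 5 points per group, so the within-ball group proportion matches the global one by construction; a real fair clustering algorithm achieves this while also optimizing cluster quality.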