Causal Disentangled Anchor Learning for Scalable Fair Multi-view Clustering
Suyuan Liu ⋅ Shengfei Wei ⋅ Wenjing Yang ⋅ Shengju Yu ⋅ Siwei Wang ⋅ Xueqiong Li ⋅ Wenpeng Lu ⋅ Xinwang Liu
Abstract
Existing fair multi-view clustering methods typically suffer from a severe trade-off between clustering utility and fairness, while incurring prohibitive quadratic complexity on large-scale datasets. To address these challenges, we propose Causal Disentangled Anchor Learning (CDAL), a novel framework that achieves scalable fairness via structural disentanglement. Guided by a structural causal model perspective, CDAL utilizes a dual-anchor mechanism to structurally separate latent representations into orthogonal semantic and sensitive subspaces. We further ensure statistical independence through a linearized Hilbert-Schmidt Independence Criterion (HSIC) constraint, which is optimized via an efficient alternating scheme. Theoretically, we prove the identifiability of the disentangled factors and establish the algorithm's global convergence and linear $\mathcal{O}(n)$ time complexity. Extensive experiments on large-scale benchmarks demonstrate that CDAL outperforms state-of-the-art methods, achieving a superior utility-fairness trade-off.
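The linearized HSIC constraint underpins the claimed $\mathcal{O}(n)$ scalability: with linear kernels $K = XX^\top$ and $L = YY^\top$, the biased HSIC estimator $\operatorname{tr}(KHLH)/(n-1)^2$ collapses to a Frobenius norm of a small cross-covariance matrix, avoiding any $n \times n$ Gram matrix. The sketch below illustrates this algebraic identity only; variable names and the exact normalization are illustrative assumptions, not CDAL's implementation.

```python
import numpy as np

def linear_hsic(X, Y):
    """Biased HSIC estimate with linear kernels.

    Equivalent to tr(K H L H) / (n-1)^2 with K = X X^T, L = Y Y^T,
    H = I - (1/n) 11^T, but computed in O(n * d1 * d2) time and
    without materializing any n x n matrix.
    """
    n = X.shape[0]
    # Center each representation over samples (applies H implicitly)
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # tr(H K H L) = ||Xc^T Yc||_F^2 for linear kernels
    C = Xc.T @ Yc
    return np.sum(C * C) / (n - 1) ** 2
```

Driving this quantity toward zero penalizes linear dependence between the semantic and sensitive subspaces; the cost scales linearly in the number of samples, matching the complexity stated in the abstract.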