The Fairness Hierarchy: A viewpoint from causal inference
Chengbo Zhang ⋅ Zhen Yao ⋅ Hao Pang ⋅ Changcheng Li
Abstract
Fairness in machine learning prediction has attracted growing attention in recent years. In this article, we propose a causal-inference-based framework for fair prediction, defined through path-specific counterfactual interventions. Instead of imposing fairness via constraints on predictive objectives or model parameters, our approach specifies fairness directly at the level of counterfactual prediction semantics. Given a learned causal graph, we construct a predictive distribution for the outcome $Y$ from a structural causal model and generate counterfactual predictions by selectively intervening on causal paths emanating from sensitive attributes. By allowing or blocking the propagation of sensitive information along designated paths, possibly originating from multiple sensitive attributes, our framework induces a hierarchy of interpretable fairness notions that generalizes standard path-specific causal semantics. Empirical experiments demonstrate how the different fairness levels can be instantiated and compared in practice.
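To make the abstract's central idea concrete, the following is a minimal sketch, not the paper's actual method: a toy linear structural causal model with one sensitive attribute $A$, a mediator $M$, and outcome $Y$, in which counterfactual predictions are generated by intervening on $A$ separately along the direct edge $A \to Y$ and the mediated path $A \to M \to Y$. The structural equations, coefficients, and the two-level "hierarchy" shown here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear SCM (illustrative assumption, not the paper's model):
#   A ~ Bernoulli(0.5)          sensitive attribute
#   M = 0.8*A + U_M             mediator on the indirect path A -> M -> Y
#   Y = 1.5*A + 2.0*M + U_Y     direct path A -> Y plus the mediated path
n = 10_000
A = rng.integers(0, 2, size=n).astype(float)
U_M = rng.normal(0.0, 1.0, n)
U_Y = rng.normal(0.0, 1.0, n)
M = 0.8 * A + U_M
Y = 1.5 * A + 2.0 * M + U_Y

def predict(a_direct, a_indirect, u_m, u_y):
    """Path-specific counterfactual prediction: A may take different
    values on the direct edge A->Y (a_direct) and on the mediated
    path A->M->Y (a_indirect), with exogenous noise held fixed."""
    m = 0.8 * a_indirect + u_m
    return 1.5 * a_direct + 2.0 * m + u_y

# Fairness level 1: block only the direct path (A := 0 on A->Y,
# while A keeps its observed value on A->M->Y).
y_block_direct = predict(np.zeros(n), A, U_M, U_Y)
# Fairness level 2: block all paths from A (full counterfactual A := 0).
y_block_all = predict(np.zeros(n), np.zeros(n), U_M, U_Y)

def gap(y):
    """Mean outcome gap between the two sensitive groups."""
    return y[A == 1].mean() - y[A == 0].mean()

print(f"observed gap:        {gap(Y):.2f}")               # ~ 1.5 + 2.0*0.8
print(f"direct path blocked: {gap(y_block_direct):.2f}")  # ~ 2.0*0.8
print(f"all paths blocked:   {gap(y_block_all):.2f}")     # ~ 0
```

Blocking progressively more paths from the sensitive attribute shrinks the group gap in the counterfactual predictions, which is the sense in which selectively allowing or blocking paths induces a hierarchy of fairness levels.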