The unfairness of a regressor is evaluated by measuring the correlation between the estimator and the sensitive attribute (e.g., race, gender, age), and the coefficient of determination (CoD) is a natural extension of the correlation coefficient when more than one sensitive attribute exists. As is well known, there is a trade-off between the fairness and accuracy of a regressor, which implies that a perfectly fair optimizer does not always yield a useful prediction. Taking this into consideration, we optimize the accuracy of the estimation subject to a user-defined level of fairness. However, imposing a fairness level as a constraint makes the feasible region nonconvex, which precludes the use of off-the-shelf convex optimizers. Despite this nonconvexity, we show that an exact solution is available by using tools from global optimization theory. Furthermore, we propose a nonlinear extension of the method via a kernel representation. Unlike most existing fairness-aware machine learning methods, our method can deal with numeric and multiple sensitive attributes.
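The unfairness measure described in the abstract can be illustrated with a short sketch (this is not the authors' code; function and variable names are hypothetical). With multiple sensitive attributes, the CoD is the R² obtained by linearly regressing the predictions on those attributes; with a single attribute it reduces to the squared correlation coefficient.

```python
# Hedged sketch: quantify "unfairness" as the coefficient of determination
# (CoD, i.e. R^2) of the regressor's predictions explained by the sensitive
# attributes. Names here (unfairness_cod, etc.) are illustrative only.
import numpy as np

def unfairness_cod(predictions, sensitive):
    """R^2 of a linear regression of predictions on sensitive attributes.

    predictions: (n,) array of regressor outputs
    sensitive:   (n, k) array with one column per sensitive attribute
    """
    y = np.asarray(predictions, dtype=float)
    S = np.asarray(sensitive, dtype=float).reshape(len(y), -1)
    # Design matrix with an intercept column.
    X = np.column_stack([np.ones(len(y)), S])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Toy check: predictions deliberately correlated with a sensitive attribute.
rng = np.random.default_rng(0)
s = rng.normal(size=200)                    # single sensitive attribute
yhat = 0.8 * s + rng.normal(size=200)       # biased predictions
print(unfairness_cod(yhat, s[:, None]))     # equals corr(yhat, s)^2 here
```

A perfectly fair predictor under this measure would have CoD near zero; the constrained formulation in the paper instead caps this quantity at a user-chosen level rather than forcing it to zero.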
Author Information
Junpei Komiyama (U-Tokyo)
Akiko Takeda (The Institute of Statistical Mathematics)
Junya Honda (University of Tokyo / RIKEN)
Hajime Shimao (Purdue University)
Related Events (a corresponding poster, oral, or spotlight)
2018 Poster: Nonconvex Optimization for Regression with Fairness Constraints
Wed. Jul 11th 04:15 -- 07:00 PM Room Hall B #79
More from the Same Authors
2020 Poster: Online Dense Subgraph Discovery via Blurred-Graph Feedback
Yuko Kuroki · Atsushi Miyauchi · Junya Honda · Masashi Sugiyama