Last-Layer Fairness Fine-tuning is Simple and Effective for Neural Networks
Yuzhen Mao · Zhun Deng · Huaxiu Yao · Ting Ye · Kenji Kawaguchi · James Zou
Event URL: https://openreview.net/forum?id=wrmynnlrTI

As machine learning is deployed ubiquitously across modern data-science applications, algorithmic fairness has become a major concern. Among approaches to fairness, imposing fairness constraints during learning, i.e. in-processing fair training, has been popular because, unlike post-processing methods, it does not require access to sensitive attributes at test time. While in-processing methods have been extensively studied for classical machine learning models, their impact on deep neural networks remains unclear. Recent research has shown that adding fairness constraints to the objective function leads to severe over-fitting to the fairness criteria in large models, and how to solve this challenge is an important open question. To tackle it, we leverage the wisdom and power of pre-training and fine-tuning and develop a simple but novel framework for training fair neural networks efficiently and inexpensively --- last-layer fine-tuning alone can effectively promote fairness in deep neural networks. This framework offers valuable insights into representation learning for training fair neural networks.
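The core idea, freezing a pre-trained network and fine-tuning only its last layer with a fairness term added to the objective, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the frozen features are given as a matrix `Z`, uses a linear last layer trained with gradient descent, and picks a squared demographic-parity gap as the fairness penalty (the paper's exact fairness criterion and optimizer may differ).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def last_layer_fair_finetune(Z, y, group, lam=1.0, lr=0.1, steps=500):
    """Fine-tune only a linear last layer on frozen features Z.

    Loss = binary cross-entropy + lam * (demographic-parity gap)^2,
    where the gap is the difference in mean predicted score between
    the two groups. Illustrative choice of fairness penalty.
    """
    n, d = Z.shape
    w, b = np.zeros(d), 0.0
    g0, g1 = (group == 0), (group == 1)
    for _ in range(steps):
        p = sigmoid(Z @ w + b)
        err = (p - y) / n                      # BCE gradient w.r.t. logits
        gap = p[g1].mean() - p[g0].mean()      # demographic-parity gap
        dp = p * (1.0 - p)                     # sigmoid derivative
        gap_grad = np.zeros(n)
        gap_grad[g1] = dp[g1] / g1.sum()
        gap_grad[g0] = -dp[g0] / g0.sum()
        coeff = err + 2.0 * lam * gap * gap_grad
        w -= lr * (Z.T @ coeff)                # only last-layer params move
        b -= lr * coeff.sum()
    return w, b

# Synthetic "frozen features" with one direction correlated with the group,
# so the plain model inherits a demographic-parity gap.
rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)
Z = rng.normal(size=(n, 5))
Z[:, 0] += 2.0 * group                          # group-correlated feature
y = (rng.random(n) < sigmoid(Z[:, 0] - 1.0)).astype(float)

w_plain, b_plain = last_layer_fair_finetune(Z, y, group, lam=0.0)
w_fair, b_fair = last_layer_fair_finetune(Z, y, group, lam=5.0)

def dp_gap(w, b):
    p = sigmoid(Z @ w + b)
    return abs(p[group == 1].mean() - p[group == 0].mean())
```

Because the backbone features are fixed, each step costs only a single matrix-vector product, which is what makes this style of fairness fine-tuning cheap relative to retraining the full network.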

Author Information

Yuzhen Mao (Simon Fraser University)
Zhun Deng (Columbia University)
Huaxiu Yao (Stanford University)
Ting Ye (University of Washington)
Kenji Kawaguchi (NUS)
James Zou (Stanford)