
Fair Selective Classification Via Sufficiency
Joshua Lee · Yuheng Bu · Deepta Rajan · Prasanna Sattigeri · Rameswar Panda · Subhro Das · Gregory Wornell

Thu Jul 22 05:00 AM -- 05:20 AM (PDT)

Selective classification is a powerful tool for decision-making in scenarios where mistakes are costly but abstentions are allowed. In general, by allowing a classifier to abstain, one can improve the performance of a model at the cost of reduced coverage, classifying fewer samples. However, recent work has shown that selective classification can, in some cases, magnify disparities between groups, and has illustrated this phenomenon on multiple real-world datasets. We prove that the sufficiency criterion can be used to mitigate these disparities by ensuring that selective classification increases performance on all groups, and we introduce a method, based on this criterion, for mitigating the disparity in precision across the entire coverage scale. We then provide an upper bound on the conditional mutual information between the class label and the sensitive attribute, conditioned on the learned features, which can be used as a regularizer to achieve fairer selective classification. The effectiveness of the method is demonstrated on the Adult, CelebA, Civil Comments, and CheXpert datasets.
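To make the setup concrete, the following is a minimal sketch (assuming NumPy; function names are hypothetical, not from the authors' code) of selective classification by confidence thresholding, together with the per-group selective accuracy whose degradation for some groups is the failure mode the paper addresses:

```python
import numpy as np

def selective_accuracy(probs, labels, coverage):
    """Accuracy on the `coverage` fraction of samples the model is most
    confident about; the remaining samples are abstained on."""
    conf = probs.max(axis=1)                        # confidence per sample
    n_keep = max(1, int(np.ceil(coverage * len(labels))))
    keep = np.argsort(-conf)[:n_keep]               # most-confident indices
    preds = probs.argmax(axis=1)
    return (preds[keep] == labels[keep]).mean()

def groupwise_selective_accuracy(probs, labels, groups, coverage):
    """Selective accuracy per sensitive group, under one global
    confidence threshold. A fair selective classifier should see this
    rise (or at least not fall) for every group as coverage shrinks."""
    conf = probs.max(axis=1)
    n_keep = max(1, int(np.ceil(coverage * len(labels))))
    keep = np.zeros(len(labels), dtype=bool)
    keep[np.argsort(-conf)[:n_keep]] = True
    preds = probs.argmax(axis=1)
    return {g: (preds[keep & (groups == g)] == labels[keep & (groups == g)]).mean()
            for g in np.unique(groups) if (keep & (groups == g)).any()}
```

The paper's contribution is not this thresholding rule itself but a training-time regularizer (an upper bound on the conditional mutual information between label and sensitive attribute given the features) that makes the per-group curves produced by such a rule improve together rather than diverge.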

Author Information

Joshua Lee (Massachusetts Institute of Technology)
Yuheng Bu (MIT)

I am an Assistant Professor with the Department of Electrical & Computer Engineering (ECE) at the University of Florida. Before joining the University of Florida, I was a postdoctoral research associate at the Research Laboratory of Electronics and Institute for Data, Systems, and Society (IDSS), Massachusetts Institute of Technology (MIT). I received my Ph.D. degree at the Coordinated Science Laboratory and the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign (UIUC) in 2019. Before that, I received a B.S. degree (with honors) in Electronic Engineering from Tsinghua University in 2014.

Deepta Rajan (IBM Research)
Prasanna Sattigeri (IBM Research)
Rameswar Panda (MIT-IBM Watson AI Lab, IBM Research)
Subhro Das (MIT-IBM Watson AI Lab, IBM Research)

Subhro Das is a Research Staff Member and Manager at the MIT-IBM AI Lab, IBM Research, Cambridge MA. As a Principal Investigator (PI), he works on developing novel AI algorithms in collaboration with MIT. He is a Research Affiliate at MIT, co-leading IBM's engagement in the MIT Quest for Intelligence, and serves as the Chair of the AI Learning Professional Interest Community (PIC) at IBM Research. His research interests are broadly in the areas of Trustworthy ML, Reinforcement Learning, and ML Optimization.

At the MIT-IBM AI Lab, he works on developing novel AI algorithms for uncertainty quantification and human-centric AI systems; robust, accelerated, online and distributed optimization; and safe, unstable, and multi-agent reinforcement learning. He led the Future of Work initiative within IBM Research, studying the impact of AI on the labor market and developing AI-driven recommendation frameworks for skills and talent management. Previously, at the IBM T.J. Watson Research Center in New York, he worked on developing signal processing and machine learning based predictive algorithms for a broad variety of biomedical and healthcare applications.

He received MS and PhD degrees in Electrical and Computer Engineering from Carnegie Mellon University in 2014 and 2016, respectively, and a Bachelor's (B.Tech.) degree in Electronics & Electrical Communication Engineering from the Indian Institute of Technology Kharagpur in 2011.

Gregory Wornell (MIT)
