Selective regression allows abstention from prediction when the confidence needed to make an accurate prediction is insufficient. In general, by allowing a reject option, one expects the performance of a regression model to improve at the cost of reduced coverage (i.e., by predicting on fewer samples). However, as we show, in some cases the performance of a minority subgroup can decrease as coverage is reduced, and thus selective regression can magnify disparities between different sensitive subgroups. Motivated by these disparities, we propose new fairness criteria for selective regression requiring the performance of every subgroup to improve with a decrease in coverage. We prove that if a feature representation satisfies the sufficiency criterion or is calibrated for mean and variance, then the proposed fairness criteria are met. Further, we introduce two approaches to mitigate the performance disparity across subgroups: (a) regularizing an upper bound of conditional mutual information under a Gaussian assumption, and (b) regularizing a contrastive loss for conditional mean and conditional variance prediction. The effectiveness of these approaches is demonstrated on synthetic and real-world datasets.
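As a toy illustration of the coverage-performance tradeoff described in the abstract, the sketch below implements a simple variance-based reject option and reports mean squared error overall and per sensitive subgroup at a given coverage level. It is not the paper's implementation: the Gaussian and contrastive regularizers are omitted, the names (selective_mse_by_group, mean_pred, var_pred, group) are illustrative assumptions, and the data are synthetic. The proposed fairness criteria ask that every subgroup's error be non-increasing as coverage decreases.

```python
# Minimal sketch (not the paper's method): selective regression with a
# variance-based reject option, evaluated per sensitive subgroup.
# Assumes a model that outputs a conditional mean and variance per sample.
import numpy as np

def selective_mse_by_group(y_true, mean_pred, var_pred, group, coverage):
    """Accept the `coverage` fraction of samples with the lowest predicted
    variance and report MSE overall and within each subgroup."""
    n = len(y_true)
    n_accept = max(1, int(round(coverage * n)))
    # Accept the most confident samples (lowest predicted variance).
    accept_idx = np.argsort(var_pred)[:n_accept]
    accepted = np.zeros(n, dtype=bool)
    accepted[accept_idx] = True

    results = {"overall": np.mean((y_true[accepted] - mean_pred[accepted]) ** 2)}
    for g in np.unique(group):
        mask = accepted & (group == g)
        # Subgroup MSE is undefined if no member of the group is accepted.
        results[g] = np.mean((y_true[mask] - mean_pred[mask]) ** 2) if mask.any() else np.nan
    return results

# Example with synthetic data: sweep coverage and check whether each
# subgroup's error decreases (the behavior the fairness criteria require).
rng = np.random.default_rng(0)
y_true = rng.normal(size=1000)
mean_pred = y_true + rng.normal(scale=0.3, size=1000)
var_pred = np.abs(rng.normal(scale=0.3, size=1000))
group = rng.integers(0, 2, size=1000)
for c in (1.0, 0.8, 0.5):
    print(c, selective_mse_by_group(y_true, mean_pred, var_pred, group, c))
```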
Author Information
Abhin Shah (MIT)
Yuheng Bu (MIT)
I am an Assistant Professor with the Department of Electrical & Computer Engineering (ECE) at the University of Florida. Before joining the University of Florida, I was a postdoctoral research associate at the Research Laboratory of Electronics and Institute for Data, Systems, and Society (IDSS), Massachusetts Institute of Technology (MIT). I received my Ph.D. degree at the Coordinated Science Laboratory and the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign (UIUC) in 2019. Before that, I received a B.S. degree (with honors) in Electronic Engineering from Tsinghua University in 2014.
Joshua Lee (Massachusetts Institute of Technology)
Subhro Das (MIT-IBM Watson AI Lab, IBM Research)
Subhro Das is a Research Staff Member and Manager at the MIT-IBM Watson AI Lab, IBM Research, Cambridge, MA. As a Principal Investigator (PI), he works on developing novel AI algorithms in collaboration with MIT. He is a Research Affiliate at MIT, co-leading IBM's engagement in the MIT Quest for Intelligence. He serves as the Chair of the AI Learning Professional Interest Community (PIC) at IBM Research. His research interests are broadly in the areas of Trustworthy ML, Reinforcement Learning, and ML Optimization. At the MIT-IBM Watson AI Lab, he works on developing novel AI algorithms for uncertainty quantification and human-centric AI systems; robust, accelerated, online, and distributed optimization; and safe, unstable, and multi-agent reinforcement learning. He led the Future of Work initiative within IBM Research, studying the impact of AI on the labor market and developing AI-driven recommendation frameworks for skills and talent management. Previously, at the IBM T.J. Watson Research Center in New York, he worked on developing signal processing and machine learning-based predictive algorithms for a broad variety of biomedical and healthcare applications. He received MS and PhD degrees in Electrical and Computer Engineering from Carnegie Mellon University in 2014 and 2016, respectively, and a Bachelor's (B.Tech.) degree in Electronics & Electrical Communication Engineering from the Indian Institute of Technology Kharagpur in 2011.
Rameswar Panda (MIT-IBM Watson AI Lab, IBM Research)
Prasanna Sattigeri (IBM Research)
Gregory Wornell (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Selective Regression under Fairness Criteria
  Thu. Jul 21st, 06:45 -- 06:50 PM, Room 307
More from the Same Authors
- 2021: Characterizing the Generalization Error of Gibbs Algorithm with Symmetrized KL information
  Gholamali Aminian · Yuheng Bu · Laura Toni · Miguel Rodrigues · Gregory Wornell
- 2022: Fast Convergence for Unstable Reinforcement Learning Problems by Logarithmic Mapping
  Wang Zhang · Lam Nguyen · Subhro Das · Alexandre Megretsky · Luca Daniel · Tsui-Wei Weng
- 2023 Poster: On Balancing Bias and Variance in Unsupervised Multi-Source-Free Domain Adaptation
  Maohao Shen · Yuheng Bu · Gregory Wornell
- 2023 Poster: ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction
  Wang Zhang · Lily Weng · Subhro Das · Alexandre Megretsky · Luca Daniel · Lam Nguyen
- 2022 Poster: Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity
  Jingzhao Zhang · Hongzhou Lin · Subhro Das · Suvrit Sra · Ali Jadbabaie
- 2022 Spotlight: Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity
  Jingzhao Zhang · Hongzhou Lin · Subhro Das · Suvrit Sra · Ali Jadbabaie
- 2022 Poster: On Convergence of Gradient Descent Ascent: A Tight Local Analysis
  Haochuan Li · Farzan Farnia · Subhro Das · Ali Jadbabaie
- 2022 Spotlight: On Convergence of Gradient Descent Ascent: A Tight Local Analysis
  Haochuan Li · Farzan Farnia · Subhro Das · Ali Jadbabaie
- 2021 Poster: Fair Selective Classification Via Sufficiency
  Joshua Lee · Yuheng Bu · Deepta Rajan · Prasanna Sattigeri · Rameswar Panda · Subhro Das · Gregory Wornell
- 2021 Oral: Fair Selective Classification Via Sufficiency
  Joshua Lee · Yuheng Bu · Deepta Rajan · Prasanna Sattigeri · Rameswar Panda · Subhro Das · Gregory Wornell