Spotlight
Active Sampling for Min-Max Fairness
Jacob Abernethy · Pranjal Awasthi · Matthäus Kleindessner · Jamie Morgenstern · Chris Russell · Jie Zhang
We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization. The key intuition behind our approach is to use, at each timestep, a datapoint from the group that is worst off under the current model to update the model. The ease of implementation and the generality of our robust formulation make it an attractive option for improving model performance on disadvantaged groups. For convex learning problems, such as linear or logistic regression, we provide a fine-grained analysis, proving the rate of convergence to a min-max fair solution.
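The abstract does not include pseudocode, so the following is only a minimal sketch of the worst-group sampling idea it describes, assuming logistic regression with {0, 1} labels trained by single-sample gradient steps. The function name `minmax_fair_sgd` and all hyperparameters are illustrative choices, not the authors' implementation.

```python
import numpy as np

def minmax_fair_sgd(X, y, groups, lr=0.1, steps=1000, seed=0):
    """Worst-group active sampling for a logistic-regression model (sketch).

    At each step, the group with the highest average loss under the current
    weights supplies the next training example, which is then used for a
    single stochastic gradient update.
    """
    rng = np.random.default_rng(seed)
    _, d = X.shape
    w = np.zeros(d)
    group_ids = np.unique(groups)
    members = {g: np.where(groups == g)[0] for g in group_ids}

    def group_loss(idx):
        # numerically stable logistic loss, labels y in {0, 1}
        z = X[idx] @ w
        return np.mean(np.maximum(z, 0) - y[idx] * z + np.log1p(np.exp(-np.abs(z))))

    for _ in range(steps):
        worst = max(group_ids, key=lambda g: group_loss(members[g]))  # worst-off group
        i = rng.choice(members[worst])                                # sample a point from it
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w)))                         # predicted probability
        w -= lr * (p - y[i]) * X[i]                                   # single-sample gradient step
    return w

# Toy usage on synthetic data with two groups of different sizes
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
groups = np.array([0] * 150 + [1] * 50)
w = minmax_fair_sgd(X, y, groups)
```

In practice the per-group losses could be estimated on a held-out sample rather than recomputed over all data at every step; the convergence analysis in the paper applies to the convex case such as the logistic loss used here.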
Author Information
Jacob Abernethy (Georgia Institute of Technology)
Pranjal Awasthi (Google)
Matthäus Kleindessner (Amazon)
Jamie Morgenstern (U Washington)
Chris Russell (Amazon)
Jie Zhang (University of Washington)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Active Sampling for Min-Max Fairness
  Tue, Jul 19 through Wed, Jul 20, Hall E #637
More from the Same Authors
- 2020 Contributed Talk: Bridging Truthfulness and Corruption-Robustness in Multi-Armed Bandit Mechanisms
  Jacob Abernethy · Bhuvesh Kumar · Thodoris Lykouris · Yinglun Xu
- 2021: How does Over-Parametrization Lead to Acceleration for Learning a Single Teacher Neuron with Quadratic Activation?
  Jun-Kun Wang · Jacob Abernethy
- 2022: ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2023: Randomized Quantization is All You Need for Differential Privacy in Federated Learning
  Yeojoon Youn · Zihao Hu · Juba Ziani · Jacob Abernethy
- 2023 Poster: When do Minimax-fair Learning and Empirical Risk Minimization Coincide?
  Harvineet Singh · Matthäus Kleindessner · Volkan Cevher · Rumi Chunara · Chris Russell
- 2022 Workshop: Principles of Distribution Shift (PODS)
  Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski
- 2022 Poster: Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models
  Paul Rolland · Volkan Cevher · Matthäus Kleindessner · Chris Russell · Dominik Janzing · Bernhard Schölkopf · Francesco Locatello
- 2022 Poster: Do More Negative Samples Necessarily Hurt In Contrastive Learning?
  Pranjal Awasthi · Nishanth Dikkala · Pritish Kamath
- 2022 Oral: Do More Negative Samples Necessarily Hurt In Contrastive Learning?
  Pranjal Awasthi · Nishanth Dikkala · Pritish Kamath
- 2022 Oral: Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models
  Paul Rolland · Volkan Cevher · Matthäus Kleindessner · Chris Russell · Dominik Janzing · Bernhard Schölkopf · Francesco Locatello
- 2022 Poster: Congested Bandits: Optimal Routing via Short-term Resets
  Pranjal Awasthi · Kush Bhatia · Sreenivas Gollapudi · Kostas Kollias
- 2022 Poster: Agnostic Learnability of Halfspaces via Logistic Loss
  Ziwei Ji · Kwangjun Ahn · Pranjal Awasthi · Satyen Kale · Stefani Karp
- 2022 Oral: Agnostic Learnability of Halfspaces via Logistic Loss
  Ziwei Ji · Kwangjun Ahn · Pranjal Awasthi · Satyen Kale · Stefani Karp
- 2022 Spotlight: Congested Bandits: Optimal Routing via Short-term Resets
  Pranjal Awasthi · Kush Bhatia · Sreenivas Gollapudi · Kostas Kollias
- 2022 Poster: H-Consistency Bounds for Surrogate Loss Minimizers
  Pranjal Awasthi · Anqi Mao · Mehryar Mohri · Yutao Zhong
- 2022 Poster: ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2022 Poster: Individual Preference Stability for Clustering
  Saba Ahmadi · Pranjal Awasthi · Samir Khuller · Matthäus Kleindessner · Jamie Morgenstern · Pattara Sukprasert · Ali Vakilian
- 2022 Oral: H-Consistency Bounds for Surrogate Loss Minimizers
  Pranjal Awasthi · Anqi Mao · Mehryar Mohri · Yutao Zhong
- 2022 Spotlight: ActiveHedge: Hedge meets Active Learning
  Bhuvesh Kumar · Jacob Abernethy · Venkatesh Saligrama
- 2022 Oral: Individual Preference Stability for Clustering
  Saba Ahmadi · Pranjal Awasthi · Samir Khuller · Matthäus Kleindessner · Jamie Morgenstern · Pattara Sukprasert · Ali Vakilian
- 2021 Poster: A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network
  Jun-Kun Wang · Chi-Heng Lin · Jacob Abernethy
- 2021 Spotlight: A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network
  Jun-Kun Wang · Chi-Heng Lin · Jacob Abernethy
- 2019 Poster: Competing Against Nash Equilibria in Adversarially Changing Zero-Sum Games
  Adrian Rivera Cardoso · Jacob Abernethy · He Wang · Huan Xu
- 2019 Poster: Making Decisions that Reduce Discriminatory Impacts
  Matt J. Kusner · Chris Russell · Joshua Loftus · Ricardo Silva
- 2019 Poster: Fair k-Center Clustering for Data Summarization
  Matthäus Kleindessner · Pranjal Awasthi · Jamie Morgenstern
- 2019 Poster: Guarantees for Spectral Clustering with Fairness Constraints
  Matthäus Kleindessner · Samira Samadi · Pranjal Awasthi · Jamie Morgenstern
- 2019 Oral: Making Decisions that Reduce Discriminatory Impacts
  Matt J. Kusner · Chris Russell · Joshua Loftus · Ricardo Silva
- 2019 Oral: Guarantees for Spectral Clustering with Fairness Constraints
  Matthäus Kleindessner · Samira Samadi · Pranjal Awasthi · Jamie Morgenstern
- 2019 Oral: Fair k-Center Clustering for Data Summarization
  Matthäus Kleindessner · Pranjal Awasthi · Jamie Morgenstern
- 2019 Oral: Competing Against Nash Equilibria in Adversarially Changing Zero-Sum Games
  Adrian Rivera Cardoso · Jacob Abernethy · He Wang · Huan Xu
- 2018 Poster: Crowdsourcing with Arbitrary Adversaries
  Matthäus Kleindessner · Pranjal Awasthi
- 2018 Oral: Crowdsourcing with Arbitrary Adversaries
  Matthäus Kleindessner · Pranjal Awasthi