Alternative Microfoundations for Strategic Classification
When reasoning about strategic behavior in a machine learning context, it is tempting to combine standard microfoundations of rational agents with the statistical decision theory underlying classification. In this work, we argue that a direct combination of these ingredients leads to brittle solution concepts of limited descriptive and prescriptive value. First, we show that rational agents with perfect information produce discontinuities in the aggregate response to a decision rule that we often do not observe empirically. Second, when any positive fraction of agents is not perfectly strategic, desirable stable points---where the classifier is optimal for the data it entails---no longer exist. Third, optimal decision rules under standard microfoundations maximize a measure of negative externality known as social burden within a broad class of assumptions about agent behavior. Recognizing these limitations, we explore alternatives to standard microfoundations for binary classification. We describe desiderata that help navigate the space of possible assumptions about agent responses, and we then propose the noisy response model. Inspired by smoothed analysis and empirical observations, noisy response incorporates imperfection in agent responses, which we show mitigates the limitations of standard microfoundations. Our model retains analytical tractability, leads to more robust insights about stable points, and imposes a lower social burden at optimality.
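The discontinuity claim can be illustrated with a minimal simulation. This is an illustrative sketch, not the paper's exact model: we assume agents with a feature below a classification threshold but within a hypothetical gaming budget either best-respond exactly to the boundary (standard microfoundations) or aim for it with Gaussian error (a stand-in for noisy response). The threshold, budget, and noise scale are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)   # raw agent features
threshold, budget = 1.0, 0.5             # hypothetical decision rule and gaming budget
gamers = (x < threshold) & (x >= threshold - budget)

# Standard microfoundations: every agent within `budget` of the threshold
# moves exactly to the boundary, creating a point mass (a discontinuity)
# in the induced feature distribution.
best_response = np.where(gamers, threshold, x)

# Noisy response (sketch): the same agents aim for the boundary but land
# with Gaussian error, which smooths the point mass away.
noisy_response = np.where(gamers, threshold + rng.normal(0.0, 0.1, size=x.size), x)

# Fraction of agents sitting exactly on the boundary after responding.
mass_exact = np.mean(best_response == threshold)
mass_noisy = np.mean(noisy_response == threshold)
print(f"point mass at threshold: exact={mass_exact:.3f}, noisy={mass_noisy:.3f}")
```

Under perfect best response, roughly 15% of agents pile up exactly at the threshold here, while the noisy responders spread into a smooth bump around it, matching the abstract's contrast between the two response models.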
Author Information
Meena Jagadeesan (University of California, Berkeley)
Celestine Mendler-Dünner (University of California, Berkeley)
Moritz Hardt (University of California, Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Alternative Microfoundations for Strategic Classification
  Thu. Jul 22nd 04:00 -- 06:00 AM
More from the Same Authors
- 2021: Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm
  Meena Jagadeesan · Ilya Razenshteyn · Suriya Gunasekar
- 2021: Causal Inference Struggles with Agency on Online Platforms
  Smitha Milli · Luca Belli · Moritz Hardt
- 2023 Poster: Algorithmic Collective Action in Machine Learning
  Moritz Hardt · Eric Mazumdar · Celestine Mendler-Dünner · Tijana Zrnic
- 2022 Poster: Regret Minimization with Performative Feedback
  Meena Jagadeesan · Tijana Zrnic · Celestine Mendler-Dünner
- 2022 Spotlight: Regret Minimization with Performative Feedback
  Meena Jagadeesan · Tijana Zrnic · Celestine Mendler-Dünner
- 2022: Invited Talk: Celestine Mendler-Dünner
  Celestine Mendler-Dünner
- 2020 Poster: Randomized Block-Diagonal Preconditioning for Parallel Learning
  Celestine Mendler-Dünner · Aurelien Lucchi
- 2020 Poster: Performative Prediction
  Juan Perdomo · Tijana Zrnic · Celestine Mendler-Dünner · Moritz Hardt
- 2020 Poster: Strategic Classification is Causal Modeling in Disguise
  John Miller · Smitha Milli · Moritz Hardt
- 2020 Poster: Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
  Yu Sun · Xiaolong Wang · Zhuang Liu · John Miller · Alexei Efros · Moritz Hardt
- 2020 Poster: Balancing Competing Objectives with Noisy Data: Score-Based Classifiers for Welfare-Aware Machine Learning
  Esther Rolf · Max Simchowitz · Sarah Dean · Lydia T. Liu · Daniel Bjorkegren · Moritz Hardt · Joshua Blumenstock
- 2019 Poster: Natural Analysts in Adaptive Data Analysis
  Tijana Zrnic · Moritz Hardt
- 2019 Poster: The Implicit Fairness Criterion of Unconstrained Learning
  Lydia T. Liu · Max Simchowitz · Moritz Hardt
- 2019 Oral: The Implicit Fairness Criterion of Unconstrained Learning
  Lydia T. Liu · Max Simchowitz · Moritz Hardt
- 2019 Oral: Natural Analysts in Adaptive Data Analysis
  Tijana Zrnic · Moritz Hardt
- 2018 Poster: Delayed Impact of Fair Machine Learning
  Lydia T. Liu · Sarah Dean · Esther Rolf · Max Simchowitz · Moritz Hardt
- 2018 Oral: Delayed Impact of Fair Machine Learning
  Lydia T. Liu · Sarah Dean · Esther Rolf · Max Simchowitz · Moritz Hardt