In this paper, we propose a natural notion of individual preference (IP) stability for clustering, which asks that every data point, on average, is closer to the points in its own cluster than to the points in any other cluster. Our notion can be motivated from several perspectives, including game theory and algorithmic fairness. We study several questions related to our proposed notion. We first show that deciding whether a given data set admits an IP-stable clustering is, in general, NP-hard. As a result, we explore the design of efficient algorithms for finding IP-stable clusterings in some restricted metric spaces. We present a polynomial-time algorithm that finds a clustering satisfying exact IP-stability on the real line, and an efficient algorithm that finds an IP-stable 2-clustering for a tree metric. We also consider relaxing the stability constraint, i.e., requiring only that every data point is not too far from its own cluster compared to any other cluster. For this case, we provide polynomial-time algorithms with different guarantees. We evaluate some of our algorithms and several standard clustering approaches on real data sets.
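The IP-stability condition from the abstract can be stated operationally: for every point, its average distance to the other members of its own cluster must not exceed its average distance to the members of any other cluster. A minimal sketch of such a checker, assuming Euclidean distances and a standard array-of-labels encoding (the function name and helper structure are illustrative, not from the paper):

```python
import numpy as np

def is_ip_stable(X, labels):
    """Check IP-stability: every point's average distance to its own
    cluster (excluding itself) is at most its average distance to
    every other cluster."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    # Pairwise Euclidean distance matrix via broadcasting.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    for i in range(len(X)):
        own = labels == labels[i]
        own[i] = False  # exclude the point itself
        # A point alone in its cluster is treated as trivially satisfied.
        own_avg = D[i, own].mean() if own.any() else 0.0
        for c in np.unique(labels):
            if c == labels[i]:
                continue
            if D[i, labels == c].mean() < own_avg:
                return False  # point i would prefer cluster c
    return True
```

For two well-separated groups on the line, the natural 2-clustering passes the check, while an assignment that mixes the groups fails it.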
Author Information
Saba Ahmadi (Toyota Technological Institute at Chicago)
Pranjal Awasthi (Google)
Samir Khuller (Northwestern University)
Matthäus Kleindessner (Amazon)
Jamie Morgenstern (University of Washington)
Pattara Sukprasert (Northwestern University)
Ali Vakilian (Toyota Technological Institute at Chicago)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Oral: Individual Preference Stability for Clustering
  Tue. Jul 19th, 06:15 -- 06:35 PM, Room 310
More from the Same Authors
- 2023 Poster: When do Minimax-fair Learning and Empirical Risk Minimization Coincide?
  Harvineet Singh · Matthäus Kleindessner · Volkan Cevher · Rumi Chunara · Chris Russell
- 2023 Poster: Sequential Strategic Screening
  Lee Cohen · Saeed Sharifi-Malvajerdi · Kevin Stangl · Ali Vakilian · Juba Ziani
- 2023 Poster: Approximation Algorithms for Fair Range Clustering
  Sedjro Salomon Hotegni · Sepideh Mahabadi · Ali Vakilian
- 2022 Workshop: Principles of Distribution Shift (PODS)
  Elan Rosenfeld · Saurabh Garg · Shibani Santurkar · Jamie Morgenstern · Hossein Mobahi · Zachary Lipton · Andrej Risteski
- 2022 Poster: Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models
  Paul Rolland · Volkan Cevher · Matthäus Kleindessner · Chris Russell · Dominik Janzing · Bernhard Schölkopf · Francesco Locatello
- 2022 Poster: Faster Fundamental Graph Algorithms via Learned Predictions
  Justin Chen · Sandeep Silwal · Ali Vakilian · Fred Zhang
- 2022 Poster: Do More Negative Samples Necessarily Hurt In Contrastive Learning?
  Pranjal Awasthi · Nishanth Dikkala · Pritish Kamath
- 2022 Oral: Do More Negative Samples Necessarily Hurt In Contrastive Learning?
  Pranjal Awasthi · Nishanth Dikkala · Pritish Kamath
- 2022 Spotlight: Faster Fundamental Graph Algorithms via Learned Predictions
  Justin Chen · Sandeep Silwal · Ali Vakilian · Fred Zhang
- 2022 Oral: Score Matching Enables Causal Discovery of Nonlinear Additive Noise Models
  Paul Rolland · Volkan Cevher · Matthäus Kleindessner · Chris Russell · Dominik Janzing · Bernhard Schölkopf · Francesco Locatello
- 2022 Poster: Congested Bandits: Optimal Routing via Short-term Resets
  Pranjal Awasthi · Kush Bhatia · Sreenivas Gollapudi · Kostas Kollias
- 2022 Poster: Agnostic Learnability of Halfspaces via Logistic Loss
  Ziwei Ji · Kwangjun Ahn · Pranjal Awasthi · Satyen Kale · Stefani Karp
- 2022 Oral: Agnostic Learnability of Halfspaces via Logistic Loss
  Ziwei Ji · Kwangjun Ahn · Pranjal Awasthi · Satyen Kale · Stefani Karp
- 2022 Spotlight: Congested Bandits: Optimal Routing via Short-term Resets
  Pranjal Awasthi · Kush Bhatia · Sreenivas Gollapudi · Kostas Kollias
- 2022 Poster: H-Consistency Bounds for Surrogate Loss Minimizers
  Pranjal Awasthi · Anqi Mao · Mehryar Mohri · Yutao Zhong
- 2022 Poster: Active Sampling for Min-Max Fairness
  Jacob Abernethy · Pranjal Awasthi · Matthäus Kleindessner · Jamie Morgenstern · Chris Russell · Jie Zhang
- 2022 Oral: H-Consistency Bounds for Surrogate Loss Minimizers
  Pranjal Awasthi · Anqi Mao · Mehryar Mohri · Yutao Zhong
- 2022 Spotlight: Active Sampling for Min-Max Fairness
  Jacob Abernethy · Pranjal Awasthi · Matthäus Kleindessner · Jamie Morgenstern · Chris Russell · Jie Zhang
- 2020 Poster: A Pairwise Fair and Community-preserving Approach to k-Center Clustering
  Brian Brubach · Darshan Chakrabarti · John P Dickerson · Samir Khuller · Aravind Srinivasan · Leonidas Tsepenekas
- 2019 Poster: Fair k-Center Clustering for Data Summarization
  Matthäus Kleindessner · Pranjal Awasthi · Jamie Morgenstern
- 2019 Poster: Guarantees for Spectral Clustering with Fairness Constraints
  Matthäus Kleindessner · Samira Samadi · Pranjal Awasthi · Jamie Morgenstern
- 2019 Oral: Guarantees for Spectral Clustering with Fairness Constraints
  Matthäus Kleindessner · Samira Samadi · Pranjal Awasthi · Jamie Morgenstern
- 2019 Oral: Fair k-Center Clustering for Data Summarization
  Matthäus Kleindessner · Pranjal Awasthi · Jamie Morgenstern
- 2018 Poster: Crowdsourcing with Arbitrary Adversaries
  Matthäus Kleindessner · Pranjal Awasthi
- 2018 Oral: Crowdsourcing with Arbitrary Adversaries
  Matthäus Kleindessner · Pranjal Awasthi