Poster
Relative Error Fair Clustering in the Weak-Strong Oracle Model
Vladimir Braverman · Prathamesh Dharangutte · Shaofeng Jiang · Hoai-An Nguyen · Chen Wang · Yubo Zhang · Samson Zhou
East Exhibition Hall A-B #E-1002
Clustering, the task of grouping similar data points together, is a central problem in machine learning. It is widely used in real-world applications such as recommending movies, summarizing customer data, or grouping job applicants. But clustering can go wrong in two key ways: it can be unfair (for example, placing people of one gender or race disproportionately in certain clusters), and it can be expensive when it relies on accurate but costly computations.

This paper tackles both problems at once. The authors study how to cluster data fairly while using a mix of cheap, potentially inaccurate information (from a weak oracle) and expensive, reliable information (from a strong oracle). Think of weak information as a quick estimate and strong information as the expensive but accurate output of a powerful model.

Their main contribution is an efficient method for selecting a small, weighted summary of the data, called a coreset, that can be clustered in place of the full dataset. Crucially, this coreset supports fair clustering (ensuring balanced representation of subgroups such as gender or race), and it can be built using only a small number of expensive strong-oracle queries. Fairness and accuracy can therefore be achieved without a huge computational cost. In short, this work helps make fairness in AI more affordable and practical, especially in settings where resources are limited and data is messy or unreliable.
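As a rough intuition for the coreset mechanic described above (a toy sketch, not the paper's algorithm): uniformly sample a small number of points, give each sampled point a weight so the sample stands in for the full dataset, and then run a weighted clustering routine on the tiny summary alone. The uniform sampling, the weighting scheme, and the plain weighted k-means below are all illustrative assumptions; the paper's construction is more careful and additionally enforces fairness and budgets the strong-oracle queries.

```python
# A minimal toy illustration of the coreset idea (not the paper's algorithm):
# uniformly sample m points, weight each by n/m so the sample stands in for
# the full dataset, then run weighted Lloyd's k-means on the coreset alone.
import random

def farthest_first_init(points, k):
    """Pick k spread-out starting centers (farthest-first heuristic)."""
    centers = [points[0]]
    while len(centers) < k:
        nxt = max(points, key=lambda p: min(
            (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers))
        centers.append(nxt)
    return centers

def weighted_kmeans(points, weights, k, iters=20):
    """Lloyd's k-means where each point carries a weight."""
    centers = farthest_first_init(points, k)
    for _ in range(iters):
        sums = [[0.0, 0.0] for _ in range(k)]
        wsum = [0.0] * k
        for (x, y), w in zip(points, weights):
            # assign the weighted point to its nearest current center
            j = min(range(k), key=lambda c: (x - centers[c][0]) ** 2 +
                                            (y - centers[c][1]) ** 2)
            sums[j][0] += w * x
            sums[j][1] += w * y
            wsum[j] += w
        # recompute each center as the weighted mean of its cluster
        centers = [(s[0] / w, s[1] / w) if w > 0 else centers[j]
                   for j, (s, w) in enumerate(zip(sums, wsum))]
    return centers

# Toy data: two well-separated groups of 500 points each.
rng = random.Random(42)
data = ([(rng.gauss(0, 0.5), rng.gauss(0, 0.5)) for _ in range(500)] +
        [(rng.gauss(5, 0.5), rng.gauss(5, 0.5)) for _ in range(500)])

m = 50  # coreset size: cluster 50 weighted points instead of 1000
coreset = rng.sample(data, m)
weights = [len(data) / m] * m  # each sampled point represents n/m originals
centers = weighted_kmeans(coreset, weights, k=2)
print(sorted(round(c[0], 1) for c in centers))
```

The recovered centers land near the two true group means even though only 5% of the data was ever clustered, which is the cost saving the abstract points to. What the sketch deliberately omits is everything the paper actually contributes: fairness constraints on cluster composition and a sampling scheme that mixes cheap weak-oracle estimates with a small budget of strong-oracle queries.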