Spotlight
Ditto: Fair and Robust Federated Learning Through Personalization
Tian Li · Shengyuan Hu · Ahmad Beirami · Virginia Smith

Tue Jul 20 07:45 PM -- 07:50 PM (PDT)

Fairness and robustness are two important concerns for federated learning systems. In this work, we identify that robustness to data and model poisoning attacks and fairness, measured as the uniformity of performance across devices, are competing constraints in statistically heterogeneous networks. To address these constraints, we propose employing a simple, general framework for personalized federated learning, Ditto, that can inherently provide fairness and robustness benefits, and develop a scalable solver for it. Theoretically, we analyze the ability of Ditto to achieve fairness and robustness simultaneously on a class of linear problems. Empirically, across a suite of federated datasets, we show that Ditto not only achieves competitive performance relative to recent personalization methods, but also enables more accurate, robust, and fair models relative to state-of-the-art fair or robust baselines.
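The abstract describes Ditto as a personalized federated learning framework: each device trains a personal model that is regularized toward a shared global model. The sketch below illustrates that idea on a least-squares problem; the solver (plain gradient descent), the hyperparameters `lam`, `lr`, and `steps`, and the helper name `local_update` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_update(X, y, w_global, lam, lr=0.1, steps=200):
    """One device's personalized solve (Ditto-style sketch):
    minimize its local least-squares loss plus a proximal term
    lam/2 * ||v - w_global||^2 pulling the personal model v
    toward the global model. lam trades off personalization
    (small lam) against conformity to the global model (large lam).
    """
    v = w_global.copy()
    n = len(y)
    for _ in range(steps):
        # gradient of (1/2n)||Xv - y||^2 + (lam/2)||v - w_global||^2
        grad = X.T @ (X @ v - y) / n + lam * (v - w_global)
        v -= lr * grad
    return v

# Toy example: one device's local data.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_global = np.zeros(3)          # stand-in for a federated global model

v_free = local_update(X, y, w_global, lam=0.0)   # pure local fit
v_prox = local_update(X, y, w_global, lam=5.0)   # pulled toward global
```

With `lam=0` the device recovers its purely local model, while a large `lam` keeps the personal model close to the global one; intermediate values interpolate between the two, which is the mechanism the abstract credits for balancing fairness and robustness.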

Author Information

Tian Li (Carnegie Mellon University)
Shengyuan Hu (Carnegie Mellon University)
Ahmad Beirami (Facebook AI)
Virginia Smith (Carnegie Mellon University)

Virginia Smith is an assistant professor in the Machine Learning Department at Carnegie Mellon University, and a courtesy faculty member in the Electrical and Computer Engineering Department. Her research interests span machine learning, optimization, and distributed systems. Prior to CMU, Virginia was a postdoc at Stanford University, received a Ph.D. in Computer Science from UC Berkeley, and obtained undergraduate degrees in Mathematics and Computer Science from the University of Virginia.
