

Poster

Pursuing Overall Welfare in Federated Learning through Sequential Decision Making

Seok-Ju Hahn · Gi-Soo Kim · Junghye Lee


Abstract: In traditional federated learning, not all clients benefit equally from a trained global model. This has motivated the pursuit of *client-level fairness* in federated learning systems, which can be realized by replacing the static aggregation scheme for updating the global model with an adaptive one that responds to the local signals of participating clients. Our work reveals that existing fairness-aware aggregation strategies can be unified into an online convex optimization framework, in other words, a central server's *sequential decision making* process. To enhance this decision-making capability, we propose simple and intuitive improvements to suboptimal designs within existing methods, resulting in **AAggFF**. Considering practical requirements, we further specialize our method for the *cross-device* and *cross-silo* settings, respectively. Theoretical analyses guarantee sublinear regret upper bounds for both settings: $\mathcal{O}(\sqrt{T \log{K}})$ for the cross-device setting and $\mathcal{O}(K \log{T})$ for the cross-silo setting, with $K$ clients and $T$ federation rounds. Extensive experiments demonstrate that a federated learning system equipped with **AAggFF** achieves a better degree of client-level fairness than existing methods in both practical settings.
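To illustrate the general idea of adaptive, fairness-aware aggregation as a server-side sequential decision process (this is a generic multiplicative-weights sketch, not the authors' AAggFF algorithm; the loss values and learning rate `eta` are hypothetical), the server can maintain a mixing weight per client and upweight clients that report larger losses each round:

```python
import math

def update_weights(weights, losses, eta):
    """One multiplicative-weights step: clients reporting larger losses
    receive larger aggregation weights in the next round, so the global
    model is steered toward currently underserved clients."""
    scaled = [w * math.exp(eta * l) for w, l in zip(weights, losses)]
    total = sum(scaled)
    return [s / total for s in scaled]

# Toy example: K = 3 clients, where client 2 is persistently underserved.
K = 3
weights = [1.0 / K] * K          # start from uniform (static FedAvg-style) weights
for _ in range(10):              # 10 federation rounds with fixed hypothetical losses
    losses = [0.2, 0.3, 0.9]
    weights = update_weights(weights, losses, eta=0.5)

print([round(w, 3) for w in weights])  # weight mass concentrates on client 2
```

This kind of exponentiated update is a standard online convex optimization tool; the paper's contribution lies in the specific decision-making designs and the regret guarantees stated above, which this sketch does not reproduce.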
