

Poster in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Incentivizing Honesty among Competitors in Collaborative Learning

Florian Dorner · Nikola Konstantinov · Georgi Pashaliev · Martin Vechev

Keywords: [ game theory ] [ federated learning ] [ strategic behavior ] [ mechanisms ] [ economics ] [ optimization ]


Abstract:

Collaborative learning techniques have the potential to enable training machine learning models that are superior to models trained on a single entity's data. However, in many cases, potential participants in such collaborative schemes are competitors on a downstream task, such as firms that each aim to attract customers by providing the best recommendations. This can incentivize dishonest updates that damage other participants' models, potentially undermining the benefits of collaboration. In this work, we formulate a game that models such interactions and study two learning tasks within this framework: single-round mean estimation and multi-round SGD on strongly-convex objectives. For a natural class of player actions, we show that rational clients are incentivized to strongly manipulate their updates, thus preventing learning. We then propose mechanisms that incentivize honest communication and ensure learning quality comparable to full cooperation. Our work shows that explicitly modeling the incentives and actions of dishonest clients, rather than assuming they are malicious, can enable strong robustness guarantees for collaborative learning.
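The single-round mean-estimation setting described in the abstract can be illustrated with a toy simulation. This sketch is purely hypothetical — the specific manipulation (one client scaling its report) and all parameters are illustrative assumptions, not the paper's actual game or mechanism; it only shows how a single strategic report can degrade the aggregate estimate that honest averaging would otherwise produce.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 1.0
n_clients, n_samples = 5, 100

# Each client draws i.i.d. samples around the shared true mean
# and computes its local sample mean.
local_means = np.array([
    rng.normal(true_mean, 1.0, n_samples).mean() for _ in range(n_clients)
])

# Honest collaboration: the server averages the reported local means.
honest_estimate = local_means.mean()

# A strategic client (index 0) distorts its report to drag the
# aggregate away from the truth, harming its competitors' models.
# The scaling factor below is an arbitrary illustrative choice.
reports = local_means.copy()
reports[0] = -10.0 * local_means[0]
manipulated_estimate = reports.mean()

honest_err = abs(honest_estimate - true_mean)
manipulated_err = abs(manipulated_estimate - true_mean)
```

With honest reports the averaged estimate concentrates near the true mean, while a single manipulated report pulls the aggregate far from it — which is why, absent a mechanism penalizing dishonesty, rational competitors may prefer to manipulate.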
