Spotlight

On the Convergence of the Shapley Value in Parametric Bayesian Learning Games

Lucas Agussurja · Xinyi Xu · Bryan Kian Hsiang Low

Room 318 - 320

Abstract:

Measuring contributions is a classical problem in cooperative game theory, where the Shapley value is the most well-known solution concept. In this paper, we establish the convergence property of the Shapley value in parametric Bayesian learning games, where players perform Bayesian inference using their combined data and the posterior-prior KL divergence is used as the characteristic function. We show that, for any two players and under some regularity conditions, the difference in their Shapley values converges in probability to the corresponding difference in a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information. As an application, we present an online collaborative learning framework that is asymptotically Shapley-fair. Our result enables this to be achieved without any costly computation of posterior-prior KL divergences; only a consistent estimator of the Fisher information is needed. The effectiveness of our framework is demonstrated with experiments using real-world data.
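To make the limiting game concrete, here is a minimal sketch (not the authors' implementation) of exact Shapley values under a log-determinant characteristic function. It assumes the joint Fisher information of a coalition is the sum of per-player Fisher information matrices; the `fisher` matrices below are synthetic placeholders standing in for consistent estimates from each player's data.

```python
import itertools
import math
import numpy as np

def shapley_values(players, v):
    """Exact Shapley values for a characteristic function v.

    players: list of player indices
    v: maps a frozenset of players to a real number, with v(empty) = 0
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        # Sum marginal contributions of p over all coalitions not containing p,
        # weighted by |C|! (n - |C| - 1)! / n!.
        for r in range(len(others) + 1):
            for coalition in itertools.combinations(others, r):
                C = frozenset(coalition)
                w = math.factorial(len(C)) * math.factorial(n - len(C) - 1) / math.factorial(n)
                phi[p] += w * (v(C | {p}) - v(C))
    return phi

# Hypothetical per-player Fisher information matrices: 3 players,
# 2-dimensional parameter, each matrix symmetric positive definite.
rng = np.random.default_rng(0)
fisher = {}
for p in range(3):
    A = rng.normal(size=(2, 2))
    fisher[p] = A @ A.T + np.eye(2)

def v_limit(C):
    """Limiting characteristic function: log-determinant of the coalition's
    joint Fisher information (normalization constants cancel in the
    Shapley-value differences the convergence result concerns)."""
    if not C:
        return 0.0
    F = sum(fisher[p] for p in C)
    return np.linalg.slogdet(F)[1]

print(shapley_values(list(range(3)), v_limit))
```

Note that the paper's guarantee is about differences in Shapley values between players, so only pairwise comparisons of the `phi` entries above carry over to the asymptotic fairness claim.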
