

Poster in Workshop: Agentic Markets Workshop

Learning Stable Allocations of Strictly Convex Stochastic Cooperative Games

Nam Tran · The-Anh Ta · Shuqing Shi · Debmalya Mandal · Yali Du · Long Tran-Thanh


Abstract:

Reward allocation is an important topic in economics, engineering, and machine learning. A central concept in reward allocation is the core: the set of stable allocations where no coalition of agents has an incentive to deviate from the grand coalition. In previous works, computing the core requires complete knowledge of the game. However, this is unrealistic, as the outcome of the game is often only partially known and may be subject to uncertainty. In this paper, we consider the core learning problem in stochastic cooperative games, where the reward distribution is unknown. Our goal is to learn the expected core, that is, the set of allocations that are stable in expectation, given an oracle that returns a stochastic reward for a queried coalition in each round. Within the class of strictly convex games, we present an algorithm that returns a point in the expected core with high probability, using a polynomial number of samples. To analyse the algorithm, we develop a new extension of the separating hyperplane theorem for multiple convex sets.
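To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of the naive approach the abstract improves upon: estimate each coalition's expected reward by Monte Carlo sampling from a stochastic oracle, then find an allocation in the estimated expected core via a linear feasibility program. The oracle `sample_reward` and the 3-player strictly convex game below are hypothetical stand-ins.

```python
# Illustrative sketch: learn an expected-core point by sampling a
# stochastic reward oracle, then solving a feasibility LP.
from itertools import chain, combinations

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 3  # number of players (hypothetical small game)

def sample_reward(coalition):
    """Hypothetical stochastic oracle: noisy reward for a coalition."""
    base = len(coalition) ** 2  # strictly convex (supermodular) mean value
    return base + rng.normal(scale=0.1)

def coalitions(n):
    players = range(n)
    return chain.from_iterable(combinations(players, k) for k in range(1, n + 1))

# Estimate expected rewards with m samples per coalition.
m = 200
v_hat = {S: np.mean([sample_reward(S) for _ in range(m)]) for S in coalitions(N)}

# Expected-core constraints: sum(x) = v_hat(grand coalition) and
# sum_{i in S} x_i >= v_hat(S) for every proper coalition S.
grand = tuple(range(N))
A_ub, b_ub = [], []
for S, val in v_hat.items():
    if S == grand:
        continue
    # Encode sum_{i in S} x_i >= v_hat(S) as -sum_{i in S} x_i <= -v_hat(S).
    A_ub.append([-1.0 if i in S else 0.0 for i in range(N)])
    b_ub.append(-val)

res = linprog(
    c=np.zeros(N),  # pure feasibility: any point satisfying the constraints
    A_ub=np.array(A_ub), b_ub=np.array(b_ub),
    A_eq=np.ones((1, N)), b_eq=[v_hat[grand]],
    bounds=[(None, None)] * N,
)
print("Allocation in the estimated expected core:", res.x)
```

Note that this brute-force sketch queries every coalition exponentially many times in the number of players; the abstract's contribution is an algorithm that, for strictly convex games, needs only polynomially many samples.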
