My Fair Bandit: Distributed Learning of Max-Min Fairness with Multi-player Bandits

Ilai Bistritz · Tavor Z Baharav · Amir Leshem · Nicholas Bambos

Keywords: [ Multiagent Learning ] [ Online Learning / Bandits ] [ Online Learning, Active Learning, and Bandits ]

Wed 15 Jul noon PDT — 12:45 p.m. PDT
Wed 15 Jul 11 p.m. PDT — 11:45 p.m. PDT


Consider N cooperative but non-communicating players, each of whom plays one of M arms for T turns. Players have different utilities for each arm, representable as an N×M matrix; these utilities are unknown to the players. In each turn, each player receives a noisy observation of the utility of their selected arm. However, if any other player selected the same arm that turn, all players on that arm receive zero utility due to the conflict. No other communication or coordination between the players is possible. Our goal is to design a distributed algorithm that learns the matching between players and arms that achieves max-min fairness while minimizing the regret. We present such an algorithm and prove that it is regret optimal up to a \log\log T factor. This is the first max-min fairness multi-player bandit algorithm with (near) order-optimal regret.
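As a concrete illustration of the setting (not the paper's algorithm), the sketch below simulates one turn of the collision model and computes, offline, the max-min fair matching the players must learn online. The sizes N and M, the Gaussian noise level, and the brute-force search over matchings are assumptions made here for illustration only:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Illustrative sizes; the true N×M utility matrix U is unknown to the players.
N, M = 3, 5
U = rng.uniform(size=(N, M))

def play_round(actions, noise_std=0.1):
    """One turn: player i pulls arm actions[i]. Players that collide
    (share an arm) receive zero utility; every other player receives
    a noisy observation of their own utility for their chosen arm."""
    actions = np.asarray(actions)
    rewards = np.zeros(N)
    for i, arm in enumerate(actions):
        if np.sum(actions == arm) > 1:   # conflict: another player chose this arm
            rewards[i] = 0.0
        else:
            rewards[i] = U[i, arm] + noise_std * rng.standard_normal()
    return rewards

# The (offline) max-min fair matching: the one-to-one assignment of
# players to arms that maximizes the worst-off player's utility.
best = max(permutations(range(M), N),
           key=lambda m: min(U[i, m[i]] for i in range(N)))
```

Note that the brute-force search over all M-permute-N matchings is only feasible for tiny instances; the point of the paper is that the players must converge to this matching in a distributed way, from noisy collision-censored feedback, without ever observing U directly.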
