Bayesian persuasion studies how an informed sender should partially disclose information to influence the behavior of a self-interested receiver. Classical models make the stringent assumption that the sender knows the receiver's utility. This can be relaxed by considering an online learning framework in which the sender repeatedly faces a receiver of an unknown, adversarially selected type. We study, for the first time, an online Bayesian persuasion setting with multiple receivers. We focus on the case with no externalities and binary actions, as is customary in offline models. Our goal is to design no-regret algorithms for the sender with polynomial per-iteration running time. First, we prove a negative result: for any 0 < α ≤ 1, there is no polynomial-time no-α-regret algorithm when the sender's utility function is supermodular or anonymous. Then, we focus on the case of submodular sender's utility functions and show that a polynomial-time no-(1-1/e)-regret algorithm can be designed. To do so, we introduce a general online gradient descent framework that handles online learning problems with a finite number of possible loss functions; it requires an approximate projection oracle. We show that, in our setting, such a projection oracle can be implemented in polynomial time.
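To give a rough sense of the algorithmic ingredient mentioned in the abstract (an online gradient descent scheme built around a projection oracle), below is a minimal, illustrative sketch of projected online gradient descent with a plug-in projection oracle. It is not the paper's algorithm: the feasible set (probability simplex), the exact simplex projection standing in for the paper's approximate projection oracle, the linear toy losses, the step size, and all function names are assumptions made only for illustration.

```python
import numpy as np

def online_gradient_descent(grad_oracle, project, x0, T, eta):
    """Projected online gradient descent: play x_t, observe the round-t loss,
    take a gradient step, then project back onto the feasible set."""
    x = np.asarray(x0, dtype=float)
    iterates = []
    for t in range(T):
        iterates.append(x.copy())
        g = grad_oracle(t, x)        # gradient of the loss revealed at round t
        x = project(x - eta * g)     # descent step followed by (approximate) projection
    return iterates

def project_simplex(v):
    """Exact Euclidean projection onto the probability simplex
    (standard sorting-based routine); used here only as a stand-in
    for an approximate projection oracle."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Toy run (hypothetical data): linear losses <c_t, x> with fixed cost vectors c_t.
rng = np.random.default_rng(0)
costs = rng.uniform(size=(100, 5))        # hypothetical per-round cost vectors
grad_oracle = lambda t, x: costs[t]       # gradient of <c_t, x> with respect to x
xs = online_gradient_descent(grad_oracle, project_simplex,
                             x0=np.ones(5) / 5, T=100, eta=0.1)
```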
Author Information
Matteo Castiglioni (Politecnico di Milano)
Alberto Marchesi (Politecnico di Milano)
Andrea Celli (Facebook CDS)
Nicola Gatti (Politecnico di Milano)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Multi-Receiver Online Bayesian Persuasion
  Thu. Jul 22nd 12:25 -- 12:30 AM
More from the Same Authors
- 2023 Poster: Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion
  Martino Bernasconi · Matteo Castiglioni · Andrea Celli · Alberto Marchesi · Francesco Trovò · Nicola Gatti
- 2023 Poster: Online Mechanism Design for Information Acquisition
  Federico Cacciamani · Matteo Castiglioni · Nicola Gatti
- 2023 Poster: Constrained Phi-Equilibria
  Martino Bernasconi · Matteo Castiglioni · Alberto Marchesi · Francesco Trovò · Nicola Gatti
- 2022 Poster: Online Learning with Knapsacks: the Best of Both Worlds
  Matteo Castiglioni · Andrea Celli · Christian Kroer
- 2022 Poster: Safe Learning in Tree-Form Sequential Decision Making: Handling Hard and Soft Constraints
  Martino Bernasconi · Federico Cacciamani · Matteo Castiglioni · Alberto Marchesi · Nicola Gatti · Francesco Trovò
- 2022 Poster: A Marriage between Adversarial Team Games and 2-player Games: Enabling Abstractions, No-regret Learning, and Subgame Solving
  Luca Carminati · Federico Cacciamani · Marco Ciccone · Nicola Gatti
- 2022 Spotlight: Online Learning with Knapsacks: the Best of Both Worlds
  Matteo Castiglioni · Andrea Celli · Christian Kroer
- 2022 Spotlight: A Marriage between Adversarial Team Games and 2-player Games: Enabling Abstractions, No-regret Learning, and Subgame Solving
  Luca Carminati · Federico Cacciamani · Marco Ciccone · Nicola Gatti
- 2022 Spotlight: Safe Learning in Tree-Form Sequential Decision Making: Handling Hard and Soft Constraints
  Martino Bernasconi · Federico Cacciamani · Matteo Castiglioni · Alberto Marchesi · Nicola Gatti · Francesco Trovò
- 2021 Poster: Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results
  Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm
- 2021 Spotlight: Connecting Optimal Ex-Ante Collusion in Teams to Extensive-Form Correlation: Faster Algorithms and Positive Complexity Results
  Gabriele Farina · Andrea Celli · Nicola Gatti · Tuomas Sandholm