
Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning
Boxiang Lyu · Zhaoran Wang · Mladen Kolar · Zhuoran Yang

Tue Jul 19 01:20 PM -- 01:25 PM (PDT)

In dynamic mechanism design, agents interact with the seller over multiple rounds, and the agents' reward functions may change over time and may depend on the seller's state. The interaction between the agents and the seller can be modeled as a Markov Decision Process (MDP). We focus on the setting where the reward and transition functions of this MDP are not known a priori, and we aim to recover the optimal mechanism from a previously collected dataset. In the setting where function approximation is employed to handle large state spaces, under only mild assumptions on the expressiveness of the function class, we design a dynamic mechanism using offline reinforcement learning algorithms. Moreover, the learned mechanism provably satisfies, in an approximate sense, three key mechanism design desiderata: efficiency, individual rationality, and truthfulness. Our algorithm is based on the pessimism principle and requires only a mild assumption on the coverage of the offline dataset. To the best of our knowledge, our work provides the first offline RL algorithm for dynamic mechanism design that does not require a uniformly explorative dataset.
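The pessimism principle mentioned above can be illustrated with a minimal single-step sketch. This is not the paper's algorithm; it only shows the core idea in a toy offline setting: value estimates for actions that the dataset covers poorly are penalized by an uncertainty bonus (here a hypothetical `beta / sqrt(count)` term), so the learner avoids committing to actions whose apparent value rests on scant data.

```python
import math
from collections import defaultdict

def pessimistic_values(dataset, beta=1.0):
    """Lower-confidence-bound value estimates from an offline dataset.

    dataset: list of (action, reward) pairs observed offline.
    beta: hypothetical hyperparameter scaling the uncertainty penalty.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for action, reward in dataset:
        totals[action] += reward
        counts[action] += 1
    # Pessimism: subtract a penalty that shrinks as the offline data
    # covers the action more densely (penalty ~ 1 / sqrt(count)).
    return {a: totals[a] / counts[a] - beta / math.sqrt(counts[a])
            for a in counts}

# Action "a" is well covered with mean reward 0.6; action "b" appears
# once with a spuriously high reward of 1.0 and is heavily penalized.
data = [("a", 0.5), ("a", 0.7)] * 50 + [("b", 1.0)]
vals = pessimistic_values(data, beta=1.0)
best = max(vals, key=vals.get)  # the well-covered action "a" wins
```

Under this toy model, the greedy choice against the raw empirical means would pick the poorly covered action "b", while the pessimistic estimate correctly prefers "a"; this mirrors why only mild coverage of the offline dataset, rather than uniform exploration, suffices.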

Author Information

Boxiang Lyu (University of Chicago Booth School of Business)
Zhaoran Wang (Northwestern University)
Mladen Kolar (University of Chicago Booth School of Business)
Zhuoran Yang (Yale University)

