
Workshop on Reinforcement Learning Theory

Bridging The Gap between Local and Joint Differential Privacy in RL

Evrard Garcelon · Vianney Perchet · Ciara Pike-Burke · Matteo Pirotta


In this paper, we study privacy in the context of finite-horizon Markov Decision Processes. Two notions of privacy have been investigated in this setting: joint differential privacy (JDP) and local differential privacy (LDP). We show that it is possible to achieve a smooth transition between JDP and LDP in terms of both privacy and regret (i.e., utility). By leveraging shuffling techniques, we present an algorithm that, depending on a tunable parameter, can attain any privacy/utility trade-off between the pure JDP and pure LDP guarantees.
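The shuffle model mentioned above sits between the local and central trust models: each user applies a local randomizer, and a trusted shuffler then permutes the reports before the server sees them, which amplifies the effective central privacy guarantee. The following is a minimal illustrative sketch of this pipeline for single-bit reports via randomized response; it is not the authors' algorithm, and the function names (`local_randomizer`, `shuffle_and_collect`) are hypothetical.

```python
import math
import random

def local_randomizer(bit, epsilon):
    """Randomized response: report the true bit with probability
    e^eps / (1 + e^eps), otherwise flip it (eps-LDP for one bit)."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p else 1 - bit

def shuffle_and_collect(bits, epsilon):
    """Each user randomizes locally, then a trusted shuffler permutes
    the reports so the server cannot link a report back to a user.
    Privacy-amplification-by-shuffling results show the permuted batch
    satisfies a stronger central DP guarantee than epsilon alone."""
    reports = [local_randomizer(b, epsilon) for b in bits]
    random.shuffle(reports)  # the shuffler's only job: break linkage
    return reports
```

Intuitively, tuning how much noise the local randomizer adds (versus how much privacy is delegated to the shuffler) is what allows an algorithm to interpolate between LDP-style and JDP-style guarantees.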