Workshop: Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning (ITR3)
Active privacy-utility trade-off against a hypothesis testing adversary
Ecenaz Erdemir · Pier Luigi Dragotti · Deniz Gunduz
We consider the privacy-utility trade-off in a time-series data sharing scenario, where a user releases data containing personal information in return for a service. We model the user's personal information as two correlated random variables: one, called the "secret variable", is to be kept private, while the other, called the "useful variable", is to be disclosed for utility. We consider active sequential data release, where the user chooses among a finite set of release mechanisms with different statistics so that the latent useful hypothesis is maximally revealed to the legitimate receiver while the adversary's confidence in the true secret variable is kept below a predefined level. As the utility measure, we consider both the probability of correctly detecting the useful variable and the mutual information (MI) between the useful variable and the released data. We formulate both problems as Markov decision processes (MDPs) and solve them numerically via advantage actor-critic (A2C) deep reinforcement learning (RL).
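The MDP described above can be sketched in code. The following is a minimal, illustrative simulation of the environment only, not the paper's method: the release distributions, the leakage penalty weight, the threshold `EPS`, and the random-policy rollout are all assumptions made for the sketch. Each step, a release mechanism is chosen, a symbol is emitted from a distribution depending on both the useful variable and the secret, and Bayesian beliefs of the receiver (over the useful variable) and the adversary (over the secret) are updated; the reward combines confidence in the true useful hypothesis with a penalty when the adversary's confidence in the true secret exceeds the threshold. An A2C agent would replace the uniform random policy with a learned one over these belief states.

```python
import math
import random

random.seed(0)

K, N_OBS, EPS = 3, 4, 0.8  # mechanisms, release alphabet size, privacy threshold (illustrative)

def rand_dist(n):
    """Random probability distribution over n symbols (illustrative statistics)."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [v / s for v in w]

# P[k][u][s] is the release distribution of mechanism k given useful
# variable u in {0,1} and secret variable s in {0,1}. Hypothetical values.
P = [[[rand_dist(N_OBS) for _ in range(2)] for _ in range(2)] for _ in range(K)]

def bayes(belief, lik):
    """One Bayesian belief update over two hypotheses."""
    post = [b * l for b, l in zip(belief, lik)]
    z = sum(post)
    return [p / z for p in post]

def sample(dist):
    """Draw a symbol index from a discrete distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if r <= acc:
            return i
    return len(dist) - 1

def step(k, u_true, s_true, bu, bs):
    """Release one sample via mechanism k; return updated beliefs and reward."""
    x = sample(P[k][u_true][s_true])
    # Receiver marginalizes the secret; adversary marginalizes the useful variable.
    lik_u = [sum(bs[s] * P[k][u][s][x] for s in (0, 1)) for u in (0, 1)]
    lik_s = [sum(bu[u] * P[k][u][s][x] for u in (0, 1)) for s in (0, 1)]
    bu, bs = bayes(bu, lik_u), bayes(bs, lik_s)
    utility = math.log(bu[u_true])               # confidence in the true useful hypothesis
    leakage = max(0.0, bs[s_true] - EPS)         # adversary confidence above threshold
    return bu, bs, utility - 10.0 * leakage      # penalty weight is an assumption

# Roll out a short episode with a uniform random policy (placeholder for A2C).
u_true, s_true = 1, 0
bu, bs = [0.5, 0.5], [0.5, 0.5]
total = 0.0
for _ in range(10):
    bu, bs, r = step(random.randrange(K), u_true, s_true, bu, bs)
    total += r
```

The belief pair `(bu, bs)` is exactly the state an actor-critic policy would condition on; the critic would estimate the value of that belief state while the actor scores the K mechanisms.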