Privately Publishable Per-instance Privacy: An Extended Abstract
Rachel Redberg · Yu-Xiang Wang

We consider how to release personalized privacy losses using per-instance differential privacy (pDP), focusing on private empirical risk minimization over the class of generalized linear models. Standard differential privacy (DP) gives us a worst-case bound that might be orders of magnitude larger than the privacy loss to a particular individual relative to a fixed dataset. The pDP framework provides a more fine-grained analysis of the privacy guarantee to a target individual, but the per-instance privacy loss itself might be a function of sensitive data. In this paper, we analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
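To make the setting concrete, the following is a minimal sketch of the objective-perturbation idea for a generalized linear model (logistic regression). Everything here is illustrative: the function name, the noise scale, and the plain gradient-descent loop are assumptions for exposition, not the calibrated mechanism or pDP analysis from the paper.

```python
import numpy as np

def objective_perturbation_logreg(X, y, eps, lam=1.0, n_steps=2000, lr=0.1, seed=None):
    """Illustrative objective perturbation for L2-regularized logistic
    regression: a random linear term b^T theta is added to the empirical
    risk before minimization. The noise scale ~ 1/eps is a placeholder,
    NOT a calibrated (eps, delta)-DP guarantee."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random perturbation vector added linearly to the objective.
    b = rng.normal(scale=2.0 / eps, size=d)
    theta = np.zeros(d)
    for _ in range(n_steps):
        # Gradient of: (1/n) sum logistic loss + (lam/2)||theta||^2 + b^T theta / n
        p = 1.0 / (1.0 + np.exp(-(X @ theta)))
        grad = X.T @ (p - y) / n + lam * theta + b / n
        theta -= lr * grad
    return theta
```

The released minimizer depends on the data only through the perturbed objective; the paper's question is how much the per-instance privacy loss of this release varies across individuals, and how to publish those losses privately.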

Author Information

Rachel Redberg (UC Santa Barbara)
Yu-Xiang Wang (UC Santa Barbara)

Yu-Xiang Wang is the Eugene Aas Assistant Professor of Computer Science at UCSB, where he runs the Statistical Machine Learning lab and co-founded the UCSB Center for Responsible Machine Learning. He also holds a visiting position at Amazon Web Services. His research interests include statistical theory and methodology, differential privacy, reinforcement learning, online learning, and deep learning.
