Balancing privacy and accuracy is a major challenge in designing differentially private machine learning algorithms. One way to improve this tradeoff for free is to leverage the noise in common data operations that already use randomness, such as noisy SGD and data subsampling. The additional noise in these operations may amplify the privacy guarantee of the overall algorithm, a phenomenon known as privacy amplification. In this paper, we analyze the privacy amplification of sampling from a multidimensional Bernoulli distribution family given the parameter from a private algorithm. This setup has applications to Bayesian inference and to data compression. We provide an algorithm to compute the amplification factor, and we establish upper and lower bounds on this factor.
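The phenomenon the abstract describes can be seen in a toy one-dimensional case. The sketch below is purely illustrative (the grid of parameter values and the distributions P and Q are made up, and this is not the paper's algorithm): it compares the pure-DP privacy loss of releasing a Bernoulli parameter theta directly against releasing a single bit x ~ Bernoulli(theta), whose marginal under each dataset is Bernoulli of the expected parameter.

```python
import math

# Toy 1-D illustration of privacy amplification by Bernoulli sampling.
# Assume a private algorithm releases a parameter theta on a finite grid,
# with output distributions P (dataset D) and Q (neighboring dataset D').
# All values below are illustrative, not taken from the paper.
thetas = [0.2, 0.5, 0.8]
P = [0.6, 0.3, 0.1]   # Pr[theta | D]
Q = [0.1, 0.3, 0.6]   # Pr[theta | D']

def pure_eps(ps, qs):
    """Worst-case absolute log-likelihood ratio (pure DP epsilon)
    between two distributions on the same finite support."""
    return max(abs(math.log(p / q)) for p, q in zip(ps, qs))

# Privacy loss of releasing theta itself:
eps_param = pure_eps(P, Q)

# Instead release one bit x ~ Bernoulli(theta). Its marginal under each
# dataset is Bernoulli(E[theta]), which mixes the parameter distributions:
p1 = sum(t * w for t, w in zip(thetas, P))
q1 = sum(t * w for t, w in zip(thetas, Q))
eps_bit = pure_eps([p1, 1 - p1], [q1, 1 - q1])

print(f"eps releasing theta:           {eps_param:.3f}")
print(f"eps releasing x ~ Bern(theta): {eps_bit:.3f}")
```

Here the sampled bit has a strictly smaller privacy loss than the parameter, because mixing over theta pulls the two marginals toward each other; quantifying this amplification factor in the multidimensional Bernoulli setting is the subject of the paper.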
Author Information
Jacob Imola (UC San Diego)
Kamalika Chaudhuri (University of California at San Diego)
More from the Same Authors
- 2021: Understanding Instance-based Interpretability of Variational Auto-Encoders · Zhifeng Kong · Kamalika Chaudhuri
- 2021: A Shuffling Framework For Local Differential Privacy · Casey M Meehan · Amrita Roy Chowdhury · Kamalika Chaudhuri · Somesh Jha
- 2021: Privacy Amplification by Subsampling in Time Domain · Tatsuki Koga · Casey M Meehan · Kamalika Chaudhuri
- 2022: Understanding Rare Spurious Correlations in Neural Networks · Yao-Yuan Yang · Chi-Ning Chou · Kamalika Chaudhuri
- 2022 Poster: Thompson Sampling for Robust Transfer in Multi-Task Bandits · Zhi Wang · Chicheng Zhang · Kamalika Chaudhuri
- 2022 Spotlight: Thompson Sampling for Robust Transfer in Multi-Task Bandits · Zhi Wang · Chicheng Zhang · Kamalika Chaudhuri
- 2022 Poster: Bounding Training Data Reconstruction in Private (Deep) Learning · Chuan Guo · Brian Karrer · Kamalika Chaudhuri · Laurens van der Maaten
- 2022 Oral: Bounding Training Data Reconstruction in Private (Deep) Learning · Chuan Guo · Brian Karrer · Kamalika Chaudhuri · Laurens van der Maaten
- 2021: Discussion Panel #2 · Bo Li · Nicholas Carlini · Andrzej Banburski · Kamalika Chaudhuri · Will Xiao · Cihang Xie
- 2021: Invited Talk #9 · Kamalika Chaudhuri
- 2021: Invited Talk: Kamalika Chaudhuri
- 2021: Invited Talk: Kamalika Chaudhuri
- 2021: Live Panel Discussion · Thomas Dietterich · Chelsea Finn · Kamalika Chaudhuri · Yarin Gal · Uri Shalit
- 2021 Poster: Sample Complexity of Robust Linear Classification on Separated Data · Robi Bhattacharjee · Somesh Jha · Kamalika Chaudhuri
- 2021 Spotlight: Sample Complexity of Robust Linear Classification on Separated Data · Robi Bhattacharjee · Somesh Jha · Kamalika Chaudhuri
- 2021 Poster: Connecting Interpretability and Robustness in Decision Trees through Separation · Michal Moshkovitz · Yao-Yuan Yang · Kamalika Chaudhuri
- 2021 Spotlight: Connecting Interpretability and Robustness in Decision Trees through Separation · Michal Moshkovitz · Yao-Yuan Yang · Kamalika Chaudhuri
- 2020 Poster: When are Non-Parametric Methods Robust? · Robi Bhattacharjee · Kamalika Chaudhuri
- 2019 Talk: Opening Remarks · Kamalika Chaudhuri · Ruslan Salakhutdinov
- 2018 Poster: Active Learning with Logged Data · Songbai Yan · Kamalika Chaudhuri · Tara Javidi
- 2018 Poster: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples · Yizhen Wang · Somesh Jha · Kamalika Chaudhuri
- 2018 Oral: Active Learning with Logged Data · Songbai Yan · Kamalika Chaudhuri · Tara Javidi
- 2018 Oral: Analyzing the Robustness of Nearest Neighbors to Adversarial Examples · Yizhen Wang · Somesh Jha · Kamalika Chaudhuri
- 2017 Workshop: Picky Learners: Choosing Alternative Ways to Process Data · Corinna Cortes · Kamalika Chaudhuri · Giulia DeSalvo · Ningshan Zhang · Chicheng Zhang
- 2017 Poster: Active Heteroscedastic Regression · Kamalika Chaudhuri · Prateek Jain · Nagarajan Natarajan
- 2017 Talk: Active Heteroscedastic Regression · Kamalika Chaudhuri · Prateek Jain · Nagarajan Natarajan