Poster

Bayesian Differential Privacy for Machine Learning

Aleksei Triastcyn · Boi Faltings

Keywords: [ Other ] [ Privacy-preserving Statistics and Machine Learning ]


Abstract:

Traditional differential privacy is independent of the data distribution. However, this worst-case view is poorly matched to the modern machine learning setting, where models are trained on specific data. As a result, achieving meaningful privacy guarantees for ML models often reduces accuracy excessively. We propose Bayesian differential privacy (BDP), which takes the data distribution into account to provide more practical privacy guarantees. We also derive a general privacy accounting method for BDP, building upon the well-known moments accountant. Our experiments demonstrate that in-distribution samples in classic machine learning datasets, such as MNIST and CIFAR-10, enjoy significantly stronger privacy guarantees than postulated by DP, while the models maintain high classification accuracy.
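
As background on the accounting approach mentioned above, here is a minimal sketch in standard moments-accountant notation (the notation is ours and not taken from the paper): the privacy loss of a randomized mechanism M on neighbouring datasets D, D' is controlled through its log moment generating function, and an (epsilon, delta) guarantee follows from a tail bound.

    % Privacy loss of mechanism M at output o for neighbouring datasets D, D'
    L(o; D, D') = \log \frac{\Pr[\mathcal{M}(D) = o]}{\Pr[\mathcal{M}(D') = o]}

    % Log moment generating function of the privacy loss
    \alpha(\lambda) = \log \mathbb{E}_{o \sim \mathcal{M}(D)}\!\left[ e^{\lambda L(o; D, D')} \right]

    % Tail bound converting moments into an (\varepsilon, \delta) guarantee
    \delta(\varepsilon) = \min_{\lambda > 0} \exp\!\big( \alpha(\lambda) - \lambda \varepsilon \big)

In the standard moments accountant, alpha(lambda) is taken over worst-case neighbouring datasets and composes additively across training iterations. Under the Bayesian variant described in the abstract, the differing data point is presumably treated as a draw from the data distribution rather than as a worst-case choice, which is what makes the resulting guarantee distribution-dependent and, for in-distribution samples, tighter than the worst-case DP bound.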
