

Poster in Workshop: Workshop on Formal Verification of Machine Learning

Formal Privacy Guarantees for Neural Network queries by estimating local Lipschitz constant

Abhishek Singh · Praneeth Vepakomma · Vivek Sharma · Ramesh Raskar


Abstract:

Cloud-based machine learning inference is an emerging paradigm in which users share their data with a service provider. Due to increased concerns over data privacy, several recent works have proposed using Adversarial Representation Learning (ARL) to learn a privacy-preserving encoding of sensitive user data before it is shared with an untrusted service provider. Traditionally, the privacy of these encodings is evaluated empirically, as they lack formal guarantees. In this work, we develop a framework that provides formal privacy guarantees for an arbitrarily trained neural network by linking its local Lipschitz constant with its local sensitivity. To use local sensitivity for guaranteeing privacy, we extend the Propose-Test-Release (PTR) framework to make it compatible and tractable for neural-network-based queries. We verify the efficacy of our framework on real-world datasets and elucidate the role of ARL in improving the privacy-utility tradeoff.
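To make the abstract's core idea concrete, below is a minimal, hedged sketch (not the authors' implementation) of how a local Lipschitz estimate around a specific input could be used as a local-sensitivity proxy to calibrate output noise. All names (`estimate_local_lipschitz`, `laplace_release`, the toy `encoder`) are hypothetical, and the Monte-Carlo estimate is only a heuristic lower bound; the paper's PTR extension additionally requires a differentially private test that the proposed sensitivity bound is stable, which this sketch omits.

```python
# Illustrative sketch only (assumed helper names, not the paper's code).
# Idea: estimate a local Lipschitz constant of a trained network around an
# input, treat (constant * radius) as a local-sensitivity proxy, and release
# the encoding with Laplace noise calibrated to it.
import torch
import torch.nn as nn


def estimate_local_lipschitz(f, x, radius=0.1, num_samples=1000):
    """Monte-Carlo *lower bound* on the local Lipschitz constant of f
    within an L2 ball of the given radius around x (heuristic, not certified)."""
    fx = f(x)
    best = 0.0
    for _ in range(num_samples):
        delta = torch.randn_like(x)
        delta = radius * torch.rand(()) * delta / delta.norm()  # random point in the ball
        ratio = (f(x + delta) - fx).norm() / delta.norm()
        best = max(best, ratio.item())
    return best


def laplace_release(f, x, radius=0.1, epsilon=1.0):
    """Release f(x) with Laplace noise scaled to the estimated local sensitivity.
    A full PTR mechanism would first privately test that this estimate is valid."""
    lip = estimate_local_lipschitz(f, x, radius)
    sensitivity = lip * radius
    noise = torch.distributions.Laplace(0.0, sensitivity / epsilon).sample(f(x).shape)
    return f(x) + noise


if __name__ == "__main__":
    # Toy encoder standing in for an ARL-trained network.
    encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
    x = torch.randn(16)
    with torch.no_grad():
        print(laplace_release(encoder, x))
```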
