Poster in Workshop: Theory and Practice of Differential Privacy
Differentially Private Bayesian Neural Network
Zhiqi Bu · Qiyiwen Zhang · Kan Chen · Qi Long
Bayesian neural networks (BNNs) allow for uncertainty quantification in prediction, offering an advantage over regular neural networks that has not been explored in the differential privacy (DP) framework. We fill this important gap by leveraging recent developments in Bayesian deep learning and privacy accounting to offer a more precise analysis of the trade-off between privacy and accuracy in BNNs. We propose three DP-BNNs that characterize the weight uncertainty for the same network architecture in distinct ways, namely DP-SGLD (via the noisy gradient method), DP-BBP (via changing the parameters of interest), and DP-MC Dropout (via the model architecture). Interestingly, we show a new equivalence between DP-SGD and DP-SGLD, implying that some non-Bayesian DP training naturally allows for uncertainty quantification. However, hyperparameters such as the learning rate and batch size can have different or even opposite effects in DP and non-DP training.
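To make the DP-SGD / DP-SGLD connection concrete, the sketch below shows a single DP-SGD step (per-sample gradient clipping, Gaussian noise, averaging) in NumPy. This is an illustrative reconstruction, not the paper's implementation: the function names and the noise parameterization are assumptions. The point is that the same noisy update can be read as an SGLD-style sampling step when the injected noise scale matches the one prescribed by Langevin dynamics (variance on the order of twice the learning rate), which is the flavor of the equivalence the abstract claims.

```python
import numpy as np

def dp_sgd_step(theta, per_sample_grads, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD update (illustrative): clip each per-sample gradient to
    clip_norm, sum, add Gaussian noise scaled by noise_multiplier * clip_norm,
    average over the batch, and take a gradient step."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch = len(per_sample_grads)
    noise = noise_multiplier * clip_norm * rng.standard_normal(theta.shape)
    noisy_grad = (np.sum(clipped, axis=0) + noise) / batch
    return theta - lr * noisy_grad
```

Under this reading, iterates of the noisy update are not merely point estimates: collected over training, they can serve as (approximate) posterior samples, which is how a non-Bayesian DP optimizer can yield uncertainty quantification for free.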
Extensive experiments show that the sampling method (DP-SGLD) significantly outperforms the optimization methods (DP-BBP and DP-MC Dropout) in terms of privacy guarantees, prediction accuracy, uncertainty quantification, computation speed, and generalizability. Compared to non-DP and non-Bayesian approaches, DP-SGLD loses remarkably little performance, demonstrating the great potential of DP-BNNs in real tasks.