

Poster in Workshop: Theory and Practice of Differential Privacy

On the Convergence of Deep Learning with Differential Privacy

Zhiqi Bu · Hua Wang · Qi Long · Weijie Su


Abstract:

In deep learning with differential privacy (DP), the neural network usually achieves privacy at the cost of slower convergence (and thus lower performance) than its non-private counterpart. This work gives the first convergence analysis of DP deep learning through the lens of training dynamics and the neural tangent kernel (NTK). Our convergence theory characterizes the effects of two key components of DP training: per-sample clipping (flat or layerwise) and noise addition. Our analysis not only establishes a general, principled framework for understanding DP deep learning with any network architecture and loss function, but also motivates a new clipping method, the global clipping, which significantly improves convergence while preserving the same privacy guarantee as the existing local clipping.
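To make the two components analyzed in the abstract concrete, below is a minimal NumPy sketch of one step of standard DP-SGD with flat per-sample (local) clipping followed by Gaussian noise addition, i.e., the baseline recipe the paper's analysis covers. The function name `dp_sgd_step` and its arguments are illustrative, not from the paper, and the paper's proposed global clipping is not reproduced here since the abstract does not spell out its exact form.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD update direction with flat per-sample clipping and Gaussian noise.

    per_sample_grads: array of shape (batch_size, num_params), one gradient per example.
    clip_norm:        clipping threshold R; each per-sample gradient is scaled to l2 norm <= R.
    noise_multiplier: sigma; the added Gaussian noise has standard deviation sigma * R.
    """
    batch_size = per_sample_grads.shape[0]

    # Local (per-sample) clipping: rescale each gradient so its l2 norm is at most clip_norm.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * factors

    # Sum the clipped gradients, add calibrated Gaussian noise, and average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=per_sample_grads.shape[1]
    )
    return noisy_sum / batch_size

# Toy usage: 8 examples, 5 parameters.
rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 5))
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
print(update)
```

The clipping bounds each example's contribution (the sensitivity of the summed gradient), which is what lets the fixed-scale Gaussian noise provide the DP guarantee; the paper's convergence theory studies how exactly this clipping and noise slow training relative to non-private SGD.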
