
On the Convergence of Deep Learning with Differential Privacy
Zhiqi Bu · Hua Wang · Qi Long · Weijie Su

In deep learning with differential privacy (DP), the neural network typically achieves privacy at the cost of slower convergence (and thus lower performance) than its non-private counterpart. This work gives the first convergence analysis of DP deep learning, through the lens of training dynamics and the neural tangent kernel (NTK). Our convergence theory successfully characterizes the effects of two key components of DP training: the per-sample clipping (flat or layerwise) and the noise addition. Our analysis not only initiates a general principled framework for understanding DP deep learning with any network architecture and loss function, but also motivates a new clipping method -- the global clipping -- that significantly improves convergence while preserving the same privacy guarantee as the existing local clipping.
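The two components the abstract highlights -- per-sample clipping and noise addition -- are the standard DP-SGD gradient-privatization steps. The following is a minimal NumPy sketch of one such aggregation step with flat (per-sample) clipping; the function name and parameters are illustrative, not from the paper, and this shows the conventional local clipping rather than the paper's proposed global clipping.

```python
import numpy as np

def dp_sgd_update(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD gradient aggregation step: flat per-sample clipping
    followed by Gaussian noise addition.

    per_sample_grads: array of shape (batch_size, dim), one gradient per example.
    Illustrative sketch; names and signature are not from the paper.
    """
    # Clip each sample's gradient to L2 norm at most clip_norm (flat/local clipping).
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale
    # Sum the clipped gradients, add Gaussian noise calibrated to the
    # clipping threshold, then average over the batch.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=per_sample_grads.shape[1]
    )
    return noisy_sum / per_sample_grads.shape[0]
```

With `noise_multiplier = 0` the update reduces to the average of the clipped per-sample gradients, which makes the clipping bias (the convergence effect the paper analyzes) easy to inspect in isolation.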

Author Information

Zhiqi Bu (University of Pennsylvania)
Hua Wang (The Wharton School, University of Pennsylvania)
Qi Long (University of Pennsylvania)
Weijie Su (University of Pennsylvania)
