Eliminating Solution Bias in Differentially Private Optimization
Abstract
Differentially private (DP) stochastic optimization algorithms are widely used in privacy-preserving deep learning, where per-sample gradient clipping and noise injection protect sensitive information. However, these operations cause existing DP algorithms to converge only to a constant-radius neighborhood of a first-order stationary point, leading to solution bias and the well-known privacy-utility trade-off. To enhance model utility, we propose a novel algorithmic framework called DP-C4, which is designed to be error-Consistently-vanishing, Coupledly-clipped, solution-Calibrated, and Convergence-guaranteed. Specifically, it incorporates a carefully designed coupled clipping scheme with a shifted-threshold strategy, ensuring that both the clipping bias and the noise variance vanish asymptotically, thereby eliminating the DP-induced solution bias. Moreover, we extend existing sensitivity analysis techniques and develop a tailored privacy budget allocation to guarantee the privacy of DP-C4. Compared with the widely used DP-SGD, our framework injects significantly less noise at the same privacy level. In addition, we prove that our framework converges to the optimum in the strongly convex case and to a diminishing neighborhood of a first-order stationary point in the non-convex case. Experiments show that DP-C4 achieves a superior privacy-utility trade-off over existing baselines across various tasks and datasets.
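For context, the sketch below illustrates the standard per-sample clipping and Gaussian noise injection step of DP-SGD, the baseline mechanism whose fixed clipping threshold and constant noise variance give rise to the solution bias discussed above. This is a minimal NumPy illustration of the baseline only, not the proposed DP-C4; the function name `dp_sgd_step` and its parameters are hypothetical, and DP-C4's coupled clipping with shifted thresholds is described in the main text.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, params, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One illustrative DP-SGD update: clip each per-sample gradient
    to L2 norm at most clip_norm, average, and add Gaussian noise
    calibrated to the per-sample sensitivity (clip_norm)."""
    rng = np.random.default_rng() if rng is None else rng
    n = per_sample_grads.shape[0]
    # Per-sample clipping: rescale any gradient whose norm exceeds clip_norm.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Gaussian noise with std proportional to clip_norm; dividing by n
    # matches averaging the clipped gradients over the batch.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    noisy_avg_grad = clipped.mean(axis=0) + noise / n
    return params - lr * noisy_avg_grad

# Toy usage: 8 per-sample gradients for a 3-dimensional parameter vector.
grads = np.random.default_rng(0).normal(size=(8, 3))
params = np.zeros(3)
params = dp_sgd_step(grads, params, clip_norm=1.0, noise_multiplier=1.0, lr=0.1)
print(params)
```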