Decoupling Regularization and Privacy in Differentially Private Ridge Regression and ERM
Wanjie Wang ⋅ Tathagata Banerjee
Abstract
We study ridge regression and ridge-regularized empirical risk minimization (ERM) under $(\varepsilon,\delta)$-differential privacy via output perturbation. In classical private ERM, the ridge parameter simultaneously controls statistical regularization and the estimator's global sensitivity: larger regularization reduces the DP noise scale but increases bias, so the choice of a single tuning parameter becomes a privacy--accuracy bottleneck. We propose a framework that makes these two roles explicit by decoupling regularization into (i) a statistical penalty $\alpha$, defining the target ridge/ERM solution, and (ii) a privacy-stabilization parameter $c$, used only to enforce a curvature floor and hence a tight sensitivity bound. We apply this framework to ridge regression, where $c$ boosts the minimum eigenvalue of the empirical Gram matrix. We derive an explicit bias--variance--DP-variance risk decomposition and characterize the optimal $(\alpha,c)$ in several regimes, yielding sharp tuning guidance and improved accuracy relative to single-parameter regularization. Finally, we extend the same decoupling principle to general ridge-regularized ERM. We support the theory with simulations.
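To make the decoupling concrete, the following is a minimal Python sketch of an output-perturbation ridge estimator with separate $\alpha$ and $c$. It is not the paper's algorithm: the function name `dp_ridge_decoupled`, the clipping scheme, and the Chaudhuri--Monteleoni--Sarwate-style sensitivity bound $\Delta = 2B^2 / (n(\alpha + c))$ are illustrative assumptions; the paper's exact estimator and sensitivity constant may differ.

```python
import numpy as np

def dp_ridge_decoupled(X, y, alpha, c, eps, delta, clip=1.0, rng=None):
    """Hedged sketch of output-perturbation ridge with decoupled regularization.

    alpha : statistical ridge penalty (defines the target estimator)
    c     : privacy-stabilization parameter (curvature floor only)

    Assumes (not taken from the paper) a sensitivity bound of the form
    Delta = 2 * clip**2 / (n * (alpha + c)), valid after clipping rows
    of X and entries of y to norm <= clip.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape

    # Clip rows and responses so the assumed sensitivity bound applies.
    row_norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xc = X * np.minimum(1.0, clip / np.maximum(row_norms, 1e-12))
    yc = np.clip(y, -clip, clip)

    # Ridge solution whose Gram matrix has curvature floor alpha + c.
    G = Xc.T @ Xc / n + (alpha + c) * np.eye(d)
    theta = np.linalg.solve(G, Xc.T @ yc / n)

    # Gaussian mechanism with the standard (eps, delta) noise calibration.
    sensitivity = 2.0 * clip**2 / (n * (alpha + c))
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return theta + rng.normal(scale=sigma, size=d)
```

The sketch shows the key structural point: the noise scale is governed by $\alpha + c$, so the curvature floor $c$ can shrink the DP noise without forcing the statistical penalty $\alpha$ to grow, which is the tension the decoupling is designed to resolve.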