PRISM: Gauge-Invariant Tangent-Space Differentially Private LoRA
Shihao Wang ⋅ Xueru Zhang
Abstract
Applying differential privacy (DP) to Low-Rank Adaptation (LoRA) via DP-SGD is a natural approach to privacy-preserving fine-tuning. However, LoRA's low-rank parameterization poses a fundamental challenge: each trainable update is represented as a low-rank matrix $Z=AB^\top$, but this factorization is non-identifiable. As a result, applying DP-SGD directly to the factors $(A,B)$ induces gauge-dependent perturbations on $Z$, leading to uncontrolled noise amplification. We propose PRISM, an intrinsic DP mechanism for LoRA that is gauge invariant by construction, avoids bilinear noise amplification, and admits an efficient low-dimensional noise sampler. PRISM yields a closed-form characterization of the effective intrinsic noise on $Z$, and its gauge invariance and bounded noise amplification enable stable privacy–utility trade-offs. We further show that the noise amplification incurred by naive DP-LoRA can be unbounded, establish standard $(\varepsilon,\delta)$-DP guarantees for PRISM, and introduce a DP-aware, gauge-invariant adaptive update that avoids amplifying injected privacy noise under adaptive optimization, improving numerical stability in practice.
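The gauge problem the abstract describes can be seen numerically. For any invertible $G$, the factors $(AG, BG^{-\top})$ represent the same update $Z=AB^\top$, yet adding fixed-scale Gaussian noise to the factors perturbs $Z$ by an amount that depends on the factor norms, which the gauge can make arbitrarily large. The following minimal NumPy sketch (illustrative only; the dimensions, noise scale, and the diagonal gauge $G=cI$ are our own choices, and this is not the PRISM mechanism) measures the induced noise on $Z$ in two equivalent gauges:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, sigma = 32, 4, 0.1  # hypothetical LoRA dims and DP noise scale

A = rng.standard_normal((d, r))
B = rng.standard_normal((d, r))

def induced_noise_on_Z(A, B, sigma, trials=200):
    """Average Frobenius norm of the perturbation of Z = A B^T
    when i.i.d. Gaussian noise of scale sigma is added to (A, B),
    as naive DP-SGD on the factors would do."""
    Z = A @ B.T
    norms = []
    for _ in range(trials):
        NA = sigma * rng.standard_normal(A.shape)
        NB = sigma * rng.standard_normal(B.shape)
        norms.append(np.linalg.norm((A + NA) @ (B + NB).T - Z))
    return float(np.mean(norms))

# Two gauges of the *same* Z: (A, B) and (cA, B/c) with G = cI.
c = 100.0
n1 = induced_noise_on_Z(A, B, sigma)
n2 = induced_noise_on_Z(c * A, B / c, sigma)
print(n1, n2)  # the second gauge amplifies the induced noise roughly c-fold
```

Since $c$ can be chosen freely, the effective noise on $Z$ under factor-space noising is unbounded over the gauge orbit, which is the amplification pathology the abstract attributes to naive DP-LoRA.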