PrivCode++ : Latent-Conditioned Differentially Private Code Generation for Comprehensive Guarantees
Abstract
Large language models fine-tuned on instruction–code pairs may memorize and subsequently leak sensitive training data. Existing differentially private (DP) code generation methods primarily protect code snippets while assuming prompts are public, an assumption that fails in realistic scenarios where prompts also contain sensitive information. When prompts cannot be explicitly learned or used during generation, code synthesis suffers severe utility degradation and reduced diversity. To address these challenges, we propose PrivCode++, the first work to explore DP code generation in a setting where both prompts and code snippets are treated as sensitive during LLM fine-tuning. PrivCode++ introduces a two-stage DP framework with a Privacy-Free Latent Conditioning module, enabling effective DP fine-tuning and data synthesis without direct access to sensitive prompts or code. Extensive experiments show that PrivCode++ achieves substantially higher utility than baselines, remains competitive with methods that operate under relaxed privacy assumptions, and provides stronger privacy guarantees.