Censoring with Plausible Deniability: Asymmetric Local Privacy for Multi-Category CDF Estimation
Abstract
We introduce a new mechanism within the Utility-Optimized Local Differential Privacy (ULDP) framework that enables censoring with plausible deniability when collecting and analyzing sensitive data. Our approach addresses scenarios where certain values, such as large numerical responses, are more privacy-sensitive than others, while accompanying categorical information may not be private on its own but could still be identifying. The mechanism selectively withholds identifying details when a response might indicate sensitive content, offering asymmetric privacy protection. Unlike previous methods, it avoids the need to predefine which values are sensitive, making it more adaptable and practical. Although the mechanism is designed for ULDP, it can also be applied under symmetric LDP settings, where it still benefits from censoring and a reduced privacy cost. We provide theoretical guarantees, including uniform consistency and pointwise weak convergence results. Numerical experiments on both synthetic and real-world data demonstrate the validity of the developed methodology.
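To make the censoring idea concrete, the following is a minimal illustrative sketch, not the paper's actual mechanism: the identifying category is withheld whenever the numerical value is potentially sensitive, and, to provide plausible deniability, it is also withheld for non-sensitive responses with some cover probability. The function name, the threshold rule, and the choice of cover probability `exp(-eps)` are all assumptions made for illustration.

```python
import math
import random

def censor_with_plausible_deniability(category, value, threshold, eps):
    """Hypothetical sketch of censoring with plausible deniability.

    Withholds the identifying category whenever the numerical value
    exceeds a sensitivity threshold, and also withholds it for
    non-sensitive responses with probability exp(-eps), so that a
    censored record does not certify that the value was sensitive.
    """
    p_cover = math.exp(-eps)  # assumed cover-censoring probability
    if value > threshold or random.random() < p_cover:
        return (None, value)  # category withheld (censored)
    return (category, value)  # category released as-is
```

Under this toy rule, a censored record has likelihood at most 1 under a sensitive value and exactly exp(-eps) under a non-sensitive one, so observing censorship shifts the evidence that the value was sensitive by a factor of at most exp(eps), in the spirit of the asymmetric guarantee described above.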