Optimal conversion from Rényi Differential Privacy to $f$-Differential Privacy
Anneliese Riess ⋅ Felipe Gomez ⋅ Flavio Calmon ⋅ Julia Schnabel ⋅ Georgios Kaissis
Abstract
We prove the conjecture stated in Appendix F.3 of Zhu et al.: among all conversion rules that map a Rényi Differential Privacy (RDP) profile $\tau \mapsto \rho(\tau)$ to a valid hypothesis-testing trade-off $f$ (or, equivalently, to an $(\varepsilon,\delta)$-Differential Privacy curve), the rule based on intersecting the single-order RDP privacy regions is optimal. This optimality holds simultaneously for all valid RDP profiles and for all Type I error levels $\alpha$. Concretely, we show that, in the space of trade-off functions, the tightest possible bound is $f_{\rho(\cdot)}(\alpha) = \sup_{\tau \geq 0.5} f_{\tau,\rho(\tau)}(\alpha)$: the pointwise maximum of the single-order bounds over all RDP privacy regions. Our proof unifies and sharpens the insights of Balle et al., Asoodeh et al., and Zhu et al. The analysis relies on a precise geometric characterization of the RDP privacy region, leveraging its convexity and the fact that its boundary is determined exclusively by Bernoulli mechanisms. Our results establish that the "intersection-of-RDP-privacy-regions" rule is not only valid but optimal: no other black-box conversion can uniformly dominate it in the Blackwell sense, marking the fundamental limit of what can be inferred about a mechanism's privacy solely from its RDP guarantees.
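The pointwise-supremum structure of $f_{\rho(\cdot)}$ can be illustrated numerically. The sketch below is *not* the paper's tight single-order bound $f_{\tau,\rho(\tau)}$: as a loose stand-in, each order $\tau > 1$ is converted to a family of $(\varepsilon,\delta)$ guarantees via the standard Mironov conversion $\varepsilon = \rho(\tau) + \log(1/\delta)/(\tau-1)$, each of which yields the $(\varepsilon,\delta)$-DP trade-off lower bound $f_{\varepsilon,\delta}(\alpha) = \max\{0,\, 1-\delta-e^{\varepsilon}\alpha,\, e^{-\varepsilon}(1-\delta-\alpha)\}$, and the bounds are maximized pointwise. The example RDP profile $\rho(\tau) = \tau/(2\sigma^2)$ (Gaussian mechanism) and the grids over $\tau$ and $\delta$ are illustrative choices; the simple conversion used here only covers orders $\tau > 1$, not the full range $\tau \geq 0.5$ in the theorem.

```python
import math

def f_eps_delta(alpha, eps, delta):
    # Trade-off function of an (eps, delta)-DP guarantee (Dong, Roth & Su):
    # a lower bound on the Type II error at Type I error level alpha.
    return max(0.0,
               1.0 - delta - math.exp(eps) * alpha,
               math.exp(-eps) * (1.0 - delta - alpha))

def rdp_tradeoff_sup(alpha, rho, taus, deltas):
    # Pointwise supremum over per-order bounds: each order tau with RDP
    # bound rho(tau) is converted (loosely, via Mironov) to (eps, delta)
    # pairs, and the resulting trade-off bounds are maximized pointwise.
    best = 0.0
    for tau in taus:
        for delta in deltas:
            eps = rho(tau) + math.log(1.0 / delta) / (tau - 1.0)
            best = max(best, f_eps_delta(alpha, eps, delta))
    return best

# Illustrative profile: Gaussian mechanism with noise multiplier sigma,
# for which rho(tau) = tau / (2 * sigma**2).
sigma = 1.0
rho = lambda tau: tau / (2.0 * sigma ** 2)
taus = [1.0 + 2.0 ** k for k in range(-4, 8)]   # grid of orders tau > 1
deltas = [10.0 ** (-d) for d in range(1, 12)]   # grid of delta values

alphas = [i / 100.0 for i in range(101)]
f_vals = [rdp_tradeoff_sup(a, rho, taus, deltas) for a in alphas]
```

By construction the resulting curve dominates every single-order bound it is built from, mirroring the supremum in the theorem statement, and inherits monotonicity from the individual $(\varepsilon,\delta)$-DP trade-off functions.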