Privacy Isn’t Free: Benchmarking the Systems Cost of Privacy-Preserving ML
Abstract
Privacy-preserving machine learning techniques are increasingly deployed in hybrid combinations, yet their system-level interactions remain poorly understood. We introduce PRIVACYBENCH, a comprehensive benchmarking framework that reveals non-additive behaviors when privacy techniques are combined, with significant performance and resource implications. Evaluating Federated Learning (FL), Differential Privacy (DP), and Secure Multi-Party Computation (SMPC) across ResNet18 and ViT models on medical imaging datasets, we uncover striking disparities: while FL and FL+SMPC preserve utility with modest overhead, FL+DP combinations exhibit severe convergence failures, with accuracy dropping from 98% to 13%, training time increasing 16×, and energy consumption rising 20×. PRIVACYBENCH provides the first systematic evaluation framework to jointly track utility, computational cost, and environmental impact across privacy configurations. These findings demonstrate that privacy techniques cannot be treated as modular components, and they highlight critical considerations for deploying privacy-preserving ML systems in resource-constrained environments.