Position: The Privacy-Auditability Paradox in Federated Learning: Why We Need Controllable Secure Aggregation
Abstract
Federated Learning (FL) has become the de facto standard for privacy-preserving machine learning, largely due to Secure Aggregation protocols that guarantee the mathematical invisibility of individual user contributions. However, we contend that this pursuit of perfect privacy has created a systemic vulnerability: the Privacy-Auditability Paradox. By rendering user updates computationally indistinguishable, current protocols create a "Sanitization Gap", in which malicious model poisoning is undetectable, and a "Regulatory Dead Zone", in which compliance with the EU AI Act's robustness and explainability mandates is mathematically impossible. In this position paper, we argue that the community must transition from "Blind Aggregation" to Controllable Secure Aggregation (CSA). We propose a cryptographic paradigm shift that uses Decentralized Multi-Client Functional Encryption and Zero-Knowledge Proofs (ZKPs) to replace binary secrecy with fine-grained, policy-based governance. This framework introduces "Verified Blindness": the server remains blind to raw data by default but holds a cryptographically regulated "Break-Glass" mechanism to audit specific inputs under consensus-based governance. We conclude that adopting CSA is not merely a technical upgrade but an existential necessity to transform Federated Learning from an unregulated academic concept into robust, compliant, and trustworthy critical infrastructure.
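To illustrate the consensus-gated "Break-Glass" idea informally (this is a hypothetical toy sketch, not the paper's protocol), one standard way to enforce that no single party can de-blind an update is to Shamir-split an audit key across a governance committee, so decryption of a flagged input is possible only when a threshold of auditors consents. The field modulus, committee size, and `audit_key` below are illustrative assumptions:

```python
# Toy sketch of a threshold "Break-Glass" gate via Shamir secret sharing:
# the audit key is split among a committee, and reconstruction (hence
# de-blinding of a flagged update) succeeds only with a consenting quorum.
import random

PRIME = 2**127 - 1  # toy field modulus (a Mersenne prime)

def split_key(secret: int, n: int, t: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Example policy: a 5-member governance committee with 3-of-5 consent.
audit_key = 0xC0FFEE
shares = split_key(audit_key, n=5, t=3)
assert reconstruct(shares[:3]) == audit_key  # quorum reached: key recovered
assert reconstruct(shares[:2]) != audit_key  # below quorum: key stays hidden
```

In a full CSA deployment, the reconstructed key would feed a functional-decryption step rather than reveal raw data directly; the sketch only captures the governance gate.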