Gradient Inversion Attacks Beyond SGD
Abstract
Gradient Inversion Attacks (GIAs) pose a significant threat to federated learning, enabling adversaries to reconstruct private training data from the information shared during training. Prior research has predominantly focused on vanilla SGD, where the server or an eavesdropper can directly observe true gradients. In practical deployments, however, models may be trained with adaptive optimizers (e.g., Adam, RMSProp, and AdaGrad), for which the observable signal is not raw gradients but momentum-based parameter updates. This setting remains underexplored and undermines traditional gradient-matching strategies, which struggle to recover labels and images from such non-gradient updates. To address this gap, this paper explores attacks tailored to modern adaptive optimizers. We present an analytical rule for recovering labels from optimizer updates and propose an update-matching objective that optimizes dummy inputs to reproduce the observed updates. The proposed approach is general and applies directly to a variety of optimizers, including Adam, AdaGrad, and RMSProp. Furthermore, we find that, despite being introduced for adaptive optimizers, the proposed objective also yields stronger attacks in the standard SGD setting. Experiments on datasets such as ImageNet and PACS highlight the effectiveness of our method over existing gradient-matching techniques.
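To make the update-matching idea described above concrete, below is a minimal sketch for the Adam case, under the assumptions that the attacker observes one Adam step (the parameter difference before and after the step), knows the model architecture, the Adam hyperparameters, and the optimizer state entering the step; the function names (`simulate_adam_update`, `update_matching_loss`) and the choice of a cosine-based distance are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def simulate_adam_update(grads, m, v, step, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Replay one Adam step from per-parameter gradients; differentiable w.r.t. grads."""
    updates = []
    for g, m_i, v_i in zip(grads, m, v):
        m_new = b1 * m_i + (1 - b1) * g
        v_new = b2 * v_i + (1 - b2) * g * g
        m_hat = m_new / (1 - b1 ** step)        # bias-corrected first moment
        v_hat = v_new / (1 - b2 ** step)        # bias-corrected second moment
        updates.append(-lr * m_hat / (v_hat.sqrt() + eps))
    return updates

def update_matching_loss(model, x_dummy, y_dummy, observed_updates, m, v, step):
    """Distance between the simulated Adam update for dummy data and the observed update."""
    loss = F.cross_entropy(model(x_dummy), y_dummy)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    sim = simulate_adam_update(grads, m, v, step)
    s = torch.cat([u.flatten() for u in sim])
    o = torch.cat([u.flatten() for u in observed_updates])
    # Negative cosine similarity over flattened updates; an L2 distance is another common choice.
    return 1 - F.cosine_similarity(s, o, dim=0)

# Attack loop sketch: optimize the dummy image so its simulated update matches the observation.
# x_dummy = torch.randn(1, 3, 224, 224, requires_grad=True)
# opt = torch.optim.Adam([x_dummy], lr=0.1)
# for _ in range(num_iters):
#     opt.zero_grad()
#     update_matching_loss(model, x_dummy, y_recovered, observed_updates, m, v, step).backward()
#     opt.step()
```

In this sketch the label `y_recovered` stands in for the output of the analytical label-recovery rule; the same structure covers RMSProp and AdaGrad by swapping in the corresponding update formula.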