Oral in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning
CertViT: Certified Robustness of Pre-Trained Vision Transformers
Keywords: [ certified robustness ] [ optimization ] [ adversarial attacks ] [ transformers ]
Lipschitz-bounded neural networks are certifiably robust and offer a good trade-off between clean and certified accuracy. Existing Lipschitz bounding methods train from scratch, are limited to moderately sized networks (< 6M parameters), require substantial hyper-parameter tuning, and are computationally prohibitive for large networks such as Vision Transformers (5M to 660M parameters). Because current methods do not scale and lack flexibility, obtaining certified robustness for transformers has not been feasible. This work presents CertViT, a two-step proximal-projection method that achieves certified robustness starting from pre-trained weights: the proximal step lowers the Lipschitz bound, while the projection step preserves the clean accuracy of the pre-trained weights. We show that CertViT networks have better certified accuracy than state-of-the-art Lipschitz-trained networks. We apply CertViT to several variants of pre-trained vision transformers and demonstrate adversarial robustness under standard attacks. Code: https://github.com/sagarverma/transformer-lipschitz
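To make the two-step idea concrete, here is a minimal, illustrative sketch (not the authors' implementation) of alternating a proximal step that shrinks a layer's spectral norm, lowering its Lipschitz bound, with a projection back toward the pre-trained weights to limit drift from the original model. The function names `prox_step`/`project_step` and the parameters `tau` and `radius` are hypothetical choices for this toy example.

```python
import numpy as np

def prox_step(W, tau=0.1):
    # Proximal step (illustrative): shrink singular values above 1 toward 1,
    # which reduces the layer's spectral norm and hence its Lipschitz bound.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.where(s > 1.0, s - tau * (s - 1.0), s)
    return U @ np.diag(s_shrunk) @ Vt

def project_step(W, W_pretrained, radius=0.5):
    # Projection step (illustrative): project W onto a Frobenius-norm ball
    # around the pre-trained weights so clean accuracy is roughly preserved.
    diff = W - W_pretrained
    norm = np.linalg.norm(diff)
    if norm > radius:
        W = W_pretrained + diff * (radius / norm)
    return W

# Toy usage: alternate the two steps on a single layer's weight matrix.
rng = np.random.default_rng(0)
W_pre = rng.normal(size=(64, 64)) / 8.0
W = W_pre.copy()
for _ in range(10):
    W = prox_step(W)
    W = project_step(W, W_pre)
print("largest singular value:", np.linalg.svd(W, compute_uv=False)[0])
```

In this sketch the proximal operator acts per layer and the projection radius controls how far the certified model may move from the pre-trained weights; the actual CertViT objective and update rules are given in the paper.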