The sample complexity of adversarial training is known to be significantly higher than that of standard ERM-based training. Although complex augmentation techniques have led to large gains in standard training, they have not been successful with adversarial training. In this work, we propose Diverse Augmentation based Joint Adversarial Training (DAJAT), which uses a combination of simple and complex augmentations with separate batch-normalization layers to handle the conflicting goals of enhancing the diversity of the training dataset while remaining close to the test distribution. We further introduce a Jensen-Shannon divergence loss to encourage joint learning across the diverse augmentations, allowing the simple augmentations to guide the learning of the complex ones. Lastly, to improve computational efficiency, we propose a two-step defense, Ascending Constraint Adversarial Training (ACAT), which uses an increasing epsilon schedule and weight-space smoothing to prevent gradient masking. The proposed method outperforms existing methods on the RobustBench leaderboard for CIFAR-10 and CIFAR-100 with ResNet-18 and WideResNet-34-10 architectures.
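The Jensen-Shannon divergence loss mentioned above penalizes disagreement between the model's predictions on differently augmented views of the same image. A minimal NumPy sketch of such a consistency term is shown below; the function names (`kl_div`, `js_consistency`) are illustrative assumptions, not the authors' implementation, and the actual method operates on batched softmax outputs inside the training loop.

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    # KL(p || q) for discrete probability vectors; eps avoids log(0).
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def js_consistency(preds):
    # Jensen-Shannon divergence among softmax predictions from
    # multiple augmented views: mean KL of each view to the mixture.
    m = np.mean(preds, axis=0)
    return float(np.mean([kl_div(p, m) for p in preds]))

# Identical predictions across views incur (near) zero loss.
p = np.array([0.7, 0.2, 0.1])
zero_loss = js_consistency([p, p, p])

# Disagreeing views are penalized, nudging the views toward
# a shared prediction (simple augmentations guiding complex ones).
q = np.array([0.1, 0.2, 0.7])
loss = js_consistency([p, q])
```

For two distributions the JS divergence (with natural logarithms) is bounded by ln 2, so the term stays on a comparable scale to the cross-entropy loss it accompanies.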
Author Information
Sravanti Addepalli (Indian Institute of Science)
Samyak Jain (Indian Institute of Technology (BHU), Varanasi)
Venkatesh Babu Radhakrishnan (Indian Institute of Science)
More from the Same Authors
-
2021 : Towards Achieving Adversarial Robustness Beyond Perceptual Limits »
Sravanti Addepalli · Samyak Jain · Gaurang Sriramanan · Shivangi Khare · Venkatesh Babu Radhakrishnan -
2022 : DAFT: Distilling Adversarially Fine-tuned teachers for OOD Robustness »
Anshul Nasery · Sravanti Addepalli · Praneeth Netrapalli · Prateek Jain -
2022 Poster: A Closer Look at Smoothness in Domain Adversarial Training »
Harsh Rangwani · Sumukh K Aithal · Mayank Mishra · Arihant Jain · Venkatesh Babu Radhakrishnan -
2022 Poster: Balancing Discriminability and Transferability for Source-Free Domain Adaptation »
Jogendra Nath Kundu · Akshay Kulkarni · Suvaansh Bhambri · Deepesh Mehta · Shreyas Kulkarni · Varun Jampani · Venkatesh Babu Radhakrishnan -
2022 Spotlight: Balancing Discriminability and Transferability for Source-Free Domain Adaptation »
Jogendra Nath Kundu · Akshay Kulkarni · Suvaansh Bhambri · Deepesh Mehta · Shreyas Kulkarni · Varun Jampani · Venkatesh Babu Radhakrishnan -
2022 Spotlight: A Closer Look at Smoothness in Domain Adversarial Training »
Harsh Rangwani · Sumukh K Aithal · Mayank Mishra · Arihant Jain · Venkatesh Babu Radhakrishnan -
2019 Poster: Zero-Shot Knowledge Distillation in Deep Networks »
Gaurav Kumar Nayak · Konda Reddy Mopuri · Vaisakh Shaj · Venkatesh Babu Radhakrishnan · Anirban Chakraborty -
2019 Oral: Zero-Shot Knowledge Distillation in Deep Networks »
Gaurav Kumar Nayak · Konda Reddy Mopuri · Vaisakh Shaj · Venkatesh Babu Radhakrishnan · Anirban Chakraborty