Poster
in
Workshop: Next Generation of AI Safety

Improving the Efficiency of Self-Supervised Adversarial Training through Latent Clustering-based Selection

Somrita Ghosh · Yuelin Xu · Xiao Zhang

Keywords: [ Adversarial Robustness ] [ Self-Supervised Learning ] [ Data Efficiency ]


Abstract:

Compared to standard learning, adversarially robust learning is widely recognized to require a much larger training dataset. Recent works incorporate external or synthetically generated unlabeled data into adversarial training via self-supervised learning. Despite achieving enhanced robustness, these methods typically require a considerable amount of additional data, leading to substantial memory consumption and long convergence times. To address these space and computational challenges, we propose a novel Latent Clustering-based Selection (LCS) scheme that strategically selects a small core subset of unlabeled data critical for achieving better robustness. In particular, our method prioritizes unlabeled data close to the model's decision boundary, while balancing the ratio between boundary and non-boundary points to avoid overfitting. Our experiments show that, when incorporated into self-supervised adversarial training, the LCS scheme significantly reduces memory and time complexity while achieving comparable model robustness.
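The abstract does not spell out the selection algorithm, but the core idea (cluster latent representations, then prioritize points near a decision boundary while mixing in points from the rest of the data) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the clustering method (plain k-means here), the boundary proxy (the gap between distances to the two nearest centroids), and all function names and parameters (`select_coreset`, `boundary_frac`, `budget`) are assumptions.

```python
import numpy as np

def latent_kmeans(z, k, iters=20, seed=0):
    """Simple k-means over latent vectors z of shape (n, d); stand-in for
    whatever clustering the paper's latent-clustering step uses."""
    rng = np.random.default_rng(seed)
    centers = z[rng.choice(len(z), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(z[:, None] - centers[None], axis=-1)  # (n, k)
        assign = dists.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = z[assign == j].mean(axis=0)
    return centers

def select_coreset(z, k=10, budget=100, boundary_frac=0.7, seed=0):
    """Select `budget` indices from unlabeled latents z.

    Boundary proxy (an assumption): the margin between a point's distances
    to its two nearest cluster centroids. A small margin suggests the point
    sits between clusters, i.e. near a decision boundary. A fraction
    `boundary_frac` of the budget goes to the smallest-margin points; the
    rest is sampled from the remaining data to avoid overfitting to the
    boundary region, mirroring the ratio-balancing described in the abstract.
    """
    rng = np.random.default_rng(seed)
    centers = latent_kmeans(z, k, seed=seed)
    dists = np.linalg.norm(z[:, None] - centers[None], axis=-1)
    dists.sort(axis=1)
    margin = dists[:, 1] - dists[:, 0]          # small = near boundary
    order = margin.argsort()                    # closest-to-boundary first
    n_boundary = int(budget * boundary_frac)
    boundary_idx = order[:n_boundary]
    filler_idx = rng.choice(order[n_boundary:],
                            size=budget - n_boundary, replace=False)
    return np.concatenate([boundary_idx, filler_idx])
```

In this sketch the selected subset would replace the full unlabeled pool in the self-supervised adversarial training loop, which is where the claimed memory and time savings come from.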
