Poster

Quadratic Upper Bound for Boosting Robustness

Euijin You · Hyang-Won Lee

East Exhibition Hall A-B #E-2301
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Fast adversarial training (FAT) aims to enhance the robustness of models against adversarial attacks with reduced training time; however, FAT often suffers from compromised robustness due to insufficient exploration of the adversarial space. In this paper, we develop a loss function to mitigate the problem of degraded robustness under FAT. Specifically, we derive a quadratic upper bound (QUB) on the adversarial training (AT) loss function and propose to use this bound with existing FAT methods. Our experimental results show that applying the QUB loss to existing methods yields significant improvements in robustness. Furthermore, using various metrics, we demonstrate that this improvement likely results from the smoothed loss landscape of the resulting model.
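To illustrate the general idea, the following is a minimal NumPy sketch of a smoothness-based quadratic upper bound on a loss, in the spirit of the QUB described above. It uses the standard descent-lemma form L(x+δ) ≤ L(x) + ∇L(x)·δ + (K/2)‖δ‖², applied to a binary logistic loss; the paper's exact bound and its use within FAT may differ, and all function names and the choice of K here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def logistic_loss(w, x, y):
    # Binary logistic loss of a linear model at input x, label y in {-1, +1}.
    return np.log1p(np.exp(-y * np.dot(w, x)))

def loss_grad_x(w, x, y):
    # Gradient of the logistic loss with respect to the *input* x.
    s = 1.0 / (1.0 + np.exp(y * np.dot(w, x)))  # sigmoid(-y * w.x)
    return -y * s * w

def qub_loss(w, x, y, delta, K):
    # Quadratic upper bound on the loss at the perturbed input x + delta:
    #   L(x) + grad(x).delta + (K/2) * ||delta||^2
    # Valid whenever the loss is K-smooth in the input.
    g = loss_grad_x(w, x, y)
    return logistic_loss(w, x, y) + g @ delta + 0.5 * K * (delta @ delta)

# Demo: for the logistic loss, K = ||w||^2 / 4 is a valid smoothness
# constant (the Hessian in x is bounded by (||w||^2 / 4) * I).
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
y = 1.0
K = 0.25 * (w @ w)
delta = 0.1 * rng.normal(size=5)  # a small adversarial perturbation

# The quadratic bound dominates the true loss at the perturbed point.
assert qub_loss(w, x, y, delta, K) >= logistic_loss(w, x + delta, y)
```

Minimizing a surrogate of this form penalizes both the clean loss and the curvature term, which is one intuition for why such bounds can yield smoother loss landscapes.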

Lay Summary:

Modern AI systems can recognize images with high accuracy, but they can be easily fooled by tiny, almost invisible changes to the image — a trick known as an adversarial attack. This poses serious safety concerns for real-world applications.

To defend against such attacks, a common approach is adversarial training, where the model learns from both clean and slightly altered images. However, faster training methods often use weaker attacks that don't fully prepare the model for stronger, more dangerous ones.

Our research proposes a new mathematical method that makes models more robust, even against attacks they haven't seen before. We adjust the way the model learns from difficult examples by focusing on worst-case situations. This leads to better protection against strong attacks.

Importantly, our method maintains the time efficiency of fast training approaches while significantly enhancing robustness. It can also be applied to many existing systems, offering a practical way to improve the safety of AI without significant training delays.
