Provable Instance Specific Robustness via Linear Constraints
Ahmed Imtiaz Humayun · Josue Casco-Rodriguez · Randall Balestriero · Richard Baraniuk
Event URL: https://openreview.net/forum?id=aVbG8bM1wg

Deep Neural Networks (DNNs) trained for classification are vulnerable to adversarial attacks, but not all classes are equally vulnerable, and adversarial training does not make all classes or groups equally robust either. For example, in classification tasks with long-tailed distributions, classes are asymmetrically affected during adversarial training, with less frequent classes attaining lower robust accuracy. To address this, we propose a provable robustness method that leverages the continuous piecewise-affine (CPA) nature of DNNs. Our method imposes linearity constraints on the decision boundary, as well as on the DNN's CPA partition, without requiring any adversarial training. Using such constraints, we show that the margin between the decision boundary and minority classes can be increased in a provable manner. We also present qualitative and quantitative validation of our method for class-specific robustness.
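The CPA property the abstract builds on can be made concrete with a small sketch. This is not the paper's method, only an illustration of the underlying fact: within the activation region containing an input, a ReLU network computes a single affine map, so its decision boundary there is exactly linear. The toy two-layer network and its weights below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer ReLU network with random weights (illustration only).
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((3, 8)), rng.standard_normal(3)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine(x):
    """Return (A, b) such that forward(y) == A @ y + b for all y in x's
    activation region -- the CPA property of ReLU networks."""
    q = (W1 @ x + b1 > 0).astype(float)   # activation pattern of x's region
    A = W2 @ (q[:, None] * W1)            # ReLU acts as diag(q) in this region
    b = W2 @ (q * b1) + b2
    return A, b

x = rng.standard_normal(4)
A, b = local_affine(x)
assert np.allclose(forward(x), A @ x + b)  # exact agreement within the region
```

Because the network is exactly affine on each region, a constraint that fixes or aligns the affine pieces around a class's samples controls the decision boundary's geometry there, which is the kind of linearity constraint the abstract refers to.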

Author Information

Ahmed Imtiaz Humayun (Rice University)
Josue Casco-Rodriguez (Rice University)
Randall Balestriero (Rice University)
Richard Baraniuk (OpenStax / Rice University)
