

Oral in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Provably Robust Cost-Sensitive Learning via Randomized Smoothing

Keywords: [ robustness certification ] [ cost-sensitive learning ] [ randomized smoothing ]


Abstract:

We focus on learning adversarially robust classifiers under a cost-sensitive scenario, where the potential harm of different class-wise adversarial transformations is encoded in a cost matrix. Existing methods are either empirical, and thus cannot certify cost-sensitive robustness, or suffer from inherent scalability issues. In this work, we study whether randomized smoothing, a more scalable robustness certification framework, can be leveraged to certify cost-sensitive robustness. We first show how to extend the vanilla randomized smoothing pipeline to provide rigorous cost-sensitive robustness guarantees for arbitrary binary cost matrices. However, when extending the standard smoothed classifier training method to cost-sensitive settings, the naive reweighting scheme does not achieve the desired performance because it optimizes the base classifier only indirectly. Motivated by this observation, we propose a more direct training method with fine-grained certified radius optimization schemes designed for different data subgroups. Experiments on image benchmark datasets demonstrate that, without sacrificing overall accuracy, our method significantly improves certified cost-sensitive robustness.
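To make the certification idea concrete, here is a minimal sketch of how a randomized-smoothing certificate can be restricted to the entries of a binary cost matrix: the smoothed classifier's prediction is estimated by Monte Carlo sampling under Gaussian noise, and the certified radius is computed against only the *costly* competitor classes rather than all runner-ups. This is an illustrative simplification, not the authors' exact pipeline; the function name, the toy base classifier, and the naive probability clamping (a real certificate would use a confidence bound such as Clopper-Pearson) are all assumptions for the sketch.

```python
import random
from statistics import NormalDist


def certify_cost_sensitive(base_classifier, x, sigma, cost_row,
                           n_samples=2000, seed=0):
    """Toy cost-sensitive certification sketch via randomized smoothing.

    cost_row[j] = 1 if misclassifying x as class j is harmful, else 0.
    Returns (predicted_class, certified_radius), where the radius only
    guards against the costly classes (hypothetical simplification).
    """
    rng = random.Random(seed)
    num_classes = len(cost_row)
    counts = [0] * num_classes
    # Monte Carlo estimate of the smoothed classifier's class frequencies.
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[base_classifier(noisy)] += 1
    top = max(range(num_classes), key=counts.__getitem__)
    p_top = counts[top] / n_samples
    # Frequency of the most likely competitor among the costly classes only.
    costly = [counts[j] for j in range(num_classes)
              if j != top and cost_row[j] > 0]
    p_costly = (max(costly) / n_samples) if costly else 0.0
    # Clamp so the Gaussian quantile stays finite in this toy sketch.
    p_top = min(p_top, 1.0 - 1e-6)
    p_costly = max(p_costly, 1e-6)
    phi_inv = NormalDist().inv_cdf
    # Cohen-et-al.-style radius, but with the costly runner-up probability.
    radius = sigma / 2 * (phi_inv(p_top) - phi_inv(p_costly))
    return top, max(radius, 0.0)
```

For example, with a 1-D three-class base classifier that predicts class 0 below -1, class 2 above 1, and class 1 otherwise, certifying the point `x = [0.0]` with `cost_row = [1, 0, 0]` yields the prediction class 1 together with a positive radius against the single costly class 0, while transformations into the harmless class 2 are ignored.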
