

Oral in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Feature Partition Aggregation: A Fast Certified Defense Against a Union of $\ell_0$ Attacks

Keywords: [ backdoor attack ] [ data poisoning ] [ sparse attack ] [ evasion attack ] [ certified defense ] [ $\ell_0$ attack ]


Abstract: Sparse or $\ell_0$ adversarial attacks arbitrarily perturb an unknown subset of the features. $\ell_0$ robustness analysis is particularly well-suited for heterogeneous (tabular) data, where features have different types or scales. State-of-the-art $\ell_0$ certified defenses are based on randomized smoothing and apply to evasion attacks only. This paper proposes feature partition aggregation (FPA), a certified defense against the union of $\ell_0$ evasion, backdoor, and poisoning attacks. FPA generates stronger robustness guarantees via an ensemble whose submodels are trained on disjoint feature sets. Compared to state-of-the-art $\ell_0$ defenses, FPA is up to $3{,}000\times$ faster and provides median robustness guarantees up to $4\times$ larger, meaning FPA provides the additional dimensions of robustness essentially for free.
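The abstract's core mechanism is an ensemble whose submodels see disjoint feature blocks, so a perturbation of $r$ features can influence at most $r$ submodels. Below is a minimal sketch of that general idea, assuming plurality-vote aggregation over logistic-regression submodels; the helper names (`train_fpa_ensemble`, `predict_with_certificate`), the choice of base learner, and the margin-based certificate are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of certification via disjoint feature partitions.
# Assumptions (not from the abstract): plurality-vote aggregation,
# logistic-regression submodels, and a certificate in which perturbing
# r features can flip at most r submodel votes.
import numpy as np
from sklearn.linear_model import LogisticRegression


def train_fpa_ensemble(X, y, n_blocks, seed=0):
    """Train one submodel per disjoint feature block."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[1])
    blocks = np.array_split(perm, n_blocks)  # disjoint feature index sets
    models = [LogisticRegression(max_iter=1000).fit(X[:, b], y) for b in blocks]
    return blocks, models


def predict_with_certificate(x, blocks, models, n_classes):
    """Plurality vote plus a certified l0 radius derived from the vote margin."""
    votes = np.zeros(n_classes, dtype=int)
    for b, m in zip(blocks, models):
        votes[int(m.predict(x[b].reshape(1, -1))[0])] += 1
    top = int(np.argmax(votes))  # ties broken toward the lower class index
    runner_up = int(np.argmax([v if c != top else -1 for c, v in enumerate(votes)]))
    # Each perturbed feature lies in at most one block, so it can flip at most
    # one vote; the prediction is stable while the (tie-break-adjusted) margin
    # exceeds twice the number of flipped votes.
    gap = votes[top] - votes[runner_up] - (1 if runner_up < top else 0)
    return top, max(gap // 2, 0)
```

A usage pattern under these assumptions: call `train_fpa_ensemble` once on the training data, then report both the predicted class and the certified radius for each test point; the radius is the number of arbitrarily perturbed features the prediction provably tolerates.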
