

Poster in Workshop: Next Generation of AI Safety

Towards Adaptive Attacks on Constrained Tabular Machine Learning

Thibault Simonetto · Salah GHAMIZI · Maxime Cordy

Keywords: [ security ] [ constrained machine learning ] [ machine learning ] [ adversarial attacks ] [ tabular data ]


Abstract:

State-of-the-art deep learning models for tabular data have recently reached performance levels acceptable for deployment in industrial settings. Yet, unlike in computer vision, there is to date no efficient constrained white-box attack to evaluate the adversarial robustness of deep tabular models, owing to intrinsic properties of tabular data such as categorical features, immutability, and feature-relationship constraints. To fill this gap, we propose CAPGD, the first efficient evasion attack for constrained tabular deep learning models. CAPGD is an iterative, parameter-free attack that generates adversarial examples under constraints. We evaluate CAPGD across four critical use cases: credit scoring, phishing, botnet attacks, and ICU survival prediction. Our empirical study covers five modern tabular deep learning architectures and demonstrates the effectiveness of our attack, which improves over the most effective existing constrained attack by 81 percentage points.
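The abstract does not spell out CAPGD's algorithm, but the core idea it names, an iterative gradient attack that keeps examples feasible under tabular constraints, can be sketched generically. The code below is a minimal illustration, not the authors' implementation: it alternates signed-gradient ascent steps with a projection that freezes immutable features, clips to an L-infinity ball, and enforces domain bounds. All names (`constrained_pgd`, `mutable_mask`) are hypothetical.

```python
import numpy as np

def constrained_pgd(x0, grad_fn, eps, n_steps=10, step=0.1,
                    mutable_mask=None, lower=None, upper=None):
    """Illustrative constrained PGD sketch (not CAPGD itself).

    Ascend the loss gradient, then project each iterate back onto a
    simple feasible set: an eps-ball around x0, optional box bounds,
    and immutable features held fixed via mutable_mask.
    """
    x = x0.copy()
    if mutable_mask is None:
        mutable_mask = np.ones_like(x0, dtype=bool)
    for _ in range(n_steps):
        g = grad_fn(x)
        # Move only the mutable features along the sign of the gradient.
        x = x + step * np.sign(g) * mutable_mask
        # Project onto the L_inf ball around the original example.
        x = np.clip(x, x0 - eps, x0 + eps)
        # Enforce domain bounds (e.g. non-negative amounts), if given.
        if lower is not None and upper is not None:
            x = np.clip(x, lower, upper)
        # Restore immutable features exactly.
        x[~mutable_mask] = x0[~mutable_mask]
    return x

# Toy usage: a fixed "gradient" and one immutable feature (index 2).
x0 = np.array([0.0, 1.0, 2.0])
grad_fn = lambda x: np.array([1.0, -1.0, 1.0])
mask = np.array([True, True, False])
adv = constrained_pgd(x0, grad_fn, eps=0.3, n_steps=5, step=0.1,
                      mutable_mask=mask,
                      lower=np.zeros(3), upper=np.full(3, 3.0))
```

Real tabular constraints (categorical one-hot groups, inter-feature relations such as `debt <= income`) need richer projections or repair operators than the simple clipping shown here.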
