

Poster in Workshop: Next Generation of AI Safety

Certified Robustness in NLP Under Bounded Levenshtein Distance

Elias Abad Rocamora · Grigorios Chrysos · Volkan Cevher

Keywords: [ Lipschitz constant ] [ Robustness verification ] [ Text classifiers ]


Abstract: Natural Language Processing (NLP) models suffer from small perturbations that, if chosen adversarially, can dramatically change the output of the model. Verification methods can provide robustness certificates against such adversarial perturbations by computing a sound lower bound on the robust accuracy. Nevertheless, existing verification methods in NLP incur prohibitive costs and cannot practically handle Levenshtein distance constraints. We propose the first method for computing the Lipschitz constant of convolutional classifiers with respect to the Levenshtein distance. We use this Lipschitz constant estimation method to train 1-Lipschitz classifiers. This enables computing the certified radius of a classifier in a single forward pass. Our method, LipsLev, obtains $38.00$% and $14.13$% verified accuracy at distances $1$ and $2$, respectively, on the AG-News dataset. We believe our work can open the door to more efficient training and verification of NLP models.
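To illustrate how a Lipschitz constant yields a certificate in a single forward pass, the sketch below shows the standard margin-based certification rule for Lipschitz classifiers. This is not the authors' LipsLev implementation: the function name, the use of PyTorch, and the margin / (2·L) radius are assumptions based on common Lipschitz certification practice, shown here only to clarify the idea of certifying a Levenshtein radius from logits.

```python
import torch

def certified_radius(logits: torch.Tensor, lip_constant: float) -> torch.Tensor:
    """Illustrative certified radius for a classifier whose logits are
    `lip_constant`-Lipschitz w.r.t. the Levenshtein distance (assumption).

    If every logit changes by at most lip_constant * d under a perturbation
    of Levenshtein distance d, the predicted class cannot flip while
    d < margin / (2 * lip_constant).
    """
    top2 = logits.topk(2, dim=-1).values      # best and runner-up logits
    margin = top2[..., 0] - top2[..., 1]      # prediction margin
    return margin / (2.0 * lip_constant)      # certified Levenshtein radius

# Example: logits from a hypothetical 1-Lipschitz text classifier.
logits = torch.tensor([[3.1, 0.4, -1.2], [0.9, 0.7, 0.1]])
radius = certified_radius(logits, lip_constant=1.0)
# An input is certifiably robust at distance d if its radius exceeds d;
# averaging this indicator over correctly classified inputs gives verified accuracy.
verified_at_1 = (radius > 1.0).float().mean()
print(radius, verified_at_1)
```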
