Adversarial Robustness for Code

Pavol Bielik, Martin Vechev



Abstract:

Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, including finding and fixing bugs, code completion, decompilation, malware detection, type inference, and many others. However, the issue of adversarial robustness of models of code has gone largely unnoticed. In this work, we explore this issue by: (i) instantiating adversarial attacks for code (a domain with discrete and highly structured inputs), (ii) showing that, as in other domains, neural models of code are vulnerable to adversarial attacks, and (iii) developing a set of novel techniques that enable training robust and accurate models of code.
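To make (i) concrete, the Python sketch below illustrates one way such an attack can be instantiated: because programs are discrete and highly structured, the attacker searches over semantics-preserving edits (here, identifier renaming) rather than taking gradient steps in a continuous input space. This is a minimal illustration only; the predict interface, the candidate names, and the toy model are hypothetical placeholders, not the paper's actual method.

import re
from typing import Callable, Iterable, Optional

def rename_identifier(source: str, old: str, new: str) -> str:
    # Semantics-preserving transformation: rename one identifier.
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

def greedy_rename_attack(
    source: str,
    identifiers: Iterable[str],
    candidates: Iterable[str],
    predict: Callable[[str], str],  # hypothetical model interface
    true_label: str,
) -> Optional[str]:
    # Greedily try renamings until the model's prediction flips.
    # The "perturbation" is a search over valid program edits, not a
    # gradient step, because the input space is discrete.
    current = source
    for ident in identifiers:
        for cand in candidates:
            perturbed = rename_identifier(current, ident, cand)
            if predict(perturbed) != true_label:
                return perturbed  # adversarial example found
    return None  # no adversarial example in this transformation space

# Toy usage: a brittle "model" that keys on a single identifier name.
snippet = "def area(width, height):\n    return width * height"
toy_predict = lambda src: "geometry" if "width" in src else "other"
print(greedy_rename_attack(snippet, ["width", "height"], ["w", "h"],
                           toy_predict, "geometry"))

Because every renaming preserves the program's semantics, any flipped prediction is by construction a robustness failure of the model rather than a genuine change in the input's meaning.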
