Deep learning has recently achieved initial success in program analysis tasks such as bug detection. Lacking real bugs, most existing works construct training and test data by injecting synthetic bugs into correct programs. Despite achieving high test accuracy (e.g., >90%), the resulting bug detectors are found to be surprisingly unusable in practice, i.e., <10% precision when used to scan real software repositories. In this work, we argue that this massive performance difference is caused by a distribution shift, i.e., a fundamental mismatch between the real bug distribution and the synthetic bug distribution used to train and evaluate the detectors. To address this key challenge, we propose to train a bug detector in two phases, first on a synthetic bug distribution to adapt the model to the bug detection domain, and then on a real bug distribution to drive the model towards the real distribution. During these two phases, we leverage a multi-task hierarchy, focal loss, and contrastive learning to further boost performance. We evaluate our approach extensively on three widely studied bug types, for which we construct new datasets carefully designed to capture the real bug distribution. The results demonstrate that our approach is practically effective and successfully mitigates the distribution shift: our learned detectors are highly performant on both our test set and the latest version of open source repositories. Our code, datasets, and models are publicly available at https://github.com/eth-sri/learning-real-bug-detector.
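To make two of the ingredients named above concrete, the following is a minimal, hypothetical PyTorch sketch of the focal loss and the two-phase training schedule (synthetic bugs first, then real bugs). The hyperparameters, loader names, and model interface are illustrative assumptions, not the paper's actual implementation; see the linked repository for that.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, labels, gamma=2.0, alpha=0.25):
    # Binary focal loss (Lin et al., 2017): scales cross-entropy by
    # (1 - p_t)^gamma so easy, confident examples contribute little and
    # training concentrates on hard ones. gamma/alpha are common defaults,
    # not necessarily the paper's settings.
    labels = labels.float()
    ce = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * labels + (1 - p) * (1 - labels)              # prob. of the true class
    alpha_t = alpha * labels + (1 - alpha) * (1 - labels)  # class-balancing weight
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()

def train_two_phase(model, synthetic_loader, real_loader, epochs=(3, 3), lr=2e-5):
    # Hypothetical two-phase schedule: phase 1 adapts the model to the
    # bug-detection task on abundant synthetic bugs; phase 2 continues
    # training on the smaller real-bug set to pull the model toward the
    # real bug distribution.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for loader, n_epochs in zip((synthetic_loader, real_loader), epochs):
        for _ in range(n_epochs):
            for tokens, labels in loader:
                loss = focal_loss(model(tokens), labels)  # one logit per example
                opt.zero_grad()
                loss.backward()
                opt.step()
    return model
```

The multi-task hierarchy and contrastive objective mentioned in the abstract are omitted here for brevity.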
Author Information
Jingxuan He (ETH Zurich)
Luca Beurer-Kellner (ETH Zurich)
Martin Vechev (ETH Zurich)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: On Distribution Shift in Learning-based Bug Detectors
  Thu, Jul 21 through Fri, Jul 22, Hall E #128
More from the Same Authors
- 2021: Automated Discovery of Adaptive Attacks on Adversarial Defenses
  Chengyuan Yao · Pavol Bielik · Petar Tsankov · Martin Vechev
- 2022 Workshop: Workshop on Formal Verification of Machine Learning
  Huan Zhang · Leslie Rice · Kaidi Xu · Aditi Raghunathan · Wan-Yi Lin · Cho-Jui Hsieh · Clark Barrett · Martin Vechev · Zico Kolter
- 2021 Poster: TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer
  Berkay Berabi · Jingxuan He · Veselin Raychev · Martin Vechev
- 2021 Poster: Scalable Certified Segmentation via Randomized Smoothing
  Marc Fischer · Maximilian Baader · Martin Vechev
- 2021 Spotlight: TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer
  Berkay Berabi · Jingxuan He · Veselin Raychev · Martin Vechev
- 2021 Spotlight: Scalable Certified Segmentation via Randomized Smoothing
  Marc Fischer · Maximilian Baader · Martin Vechev
- 2021 Poster: PODS: Policy Optimization via Differentiable Simulation
  Miguel Angel Zamora Mora · Momchil Peychev · Sehoon Ha · Martin Vechev · Stelian Coros
- 2021 Spotlight: PODS: Policy Optimization via Differentiable Simulation
  Miguel Angel Zamora Mora · Momchil Peychev · Sehoon Ha · Martin Vechev · Stelian Coros
- 2020 Poster: Adversarial Robustness for Code
  Pavol Bielik · Martin Vechev
- 2020 Poster: Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
  Raphaël Dang-Nhu · Gagandeep Singh · Pavol Bielik · Martin Vechev
- 2019 Poster: DL2: Training and Querying Neural Networks with Logic
  Marc Fischer · Mislav Balunovic · Dana Drachsler-Cohen · Timon Gehr · Ce Zhang · Martin Vechev
- 2019 Oral: DL2: Training and Querying Neural Networks with Logic
  Marc Fischer · Mislav Balunovic · Dana Drachsler-Cohen · Timon Gehr · Ce Zhang · Martin Vechev
- 2018 Poster: Training Neural Machines with Trace-Based Supervision
  Matthew Mirman · Dimitar Dimitrov · Pavle Djordjevic · Timon Gehr · Martin Vechev
- 2018 Oral: Training Neural Machines with Trace-Based Supervision
  Matthew Mirman · Dimitar Dimitrov · Pavle Djordjevic · Timon Gehr · Martin Vechev
- 2018 Poster: Differentiable Abstract Interpretation for Provably Robust Neural Networks
  Matthew Mirman · Timon Gehr · Martin Vechev
- 2018 Oral: Differentiable Abstract Interpretation for Provably Robust Neural Networks
  Matthew Mirman · Timon Gehr · Martin Vechev