Deep learning has recently achieved initial success in program analysis tasks such as bug detection. Lacking real bugs, most existing works construct training and test data by injecting synthetic bugs into correct programs. Despite achieving high test accuracy (e.g., >90%), the resulting bug detectors are found to be surprisingly unusable in practice, i.e., <10% precision when used to scan real software repositories. In this work, we argue that this massive performance difference is caused by a distribution shift, i.e., a fundamental mismatch between the real bug distribution and the synthetic bug distribution used to train and evaluate the detectors. To address this key challenge, we propose to train a bug detector in two phases, first on a synthetic bug distribution to adapt the model to the bug detection domain, and then on a real bug distribution to drive the model towards the real distribution. During these two phases, we leverage a multi-task hierarchy, focal loss, and contrastive learning to further boost performance. We evaluate our approach extensively on three widely studied bug types, for which we construct new datasets carefully designed to capture the real bug distribution. The results demonstrate that our approach is practically effective and successfully mitigates the distribution shift: our learned detectors are highly performant on both our test set and the latest version of open source repositories. Our code, datasets, and models are publicly available at https://github.com/eth-sri/learning-real-bug-detector.
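The abstract mentions focal loss as one of the techniques used to boost performance during the two training phases. As a minimal illustration of that idea (not the paper's implementation, which the linked repository contains), the sketch below shows the standard binary focal loss: with focusing parameter gamma = 0 it reduces to plain cross-entropy, and larger gamma down-weights well-classified examples so training concentrates on hard ones. The function name and signature are illustrative, not taken from the paper's code.

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Focal loss for a single binary prediction.

    p: predicted probability of the positive (buggy) class.
    y: ground-truth label, 1 (buggy) or 0 (correct).
    gamma: focusing parameter; gamma = 0 recovers cross-entropy.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# An easy, well-classified example is down-weighted far more than a hard,
# misclassified one, so gradient updates focus on the hard cases.
easy = focal_loss(0.95, 1)   # confident, correct prediction on a buggy sample
hard = focal_loss(0.10, 1)   # misclassified buggy sample
```

This kind of re-weighting is one common way to handle the heavy class imbalance a bug detector faces when scanning real repositories, where buggy code is rare.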
Author Information
Jingxuan He (ETH Zurich)
Luca Beurer-Kellner (ETH Zurich)
Martin Vechev (ETH Zurich)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: On Distribution Shift in Learning-based Bug Detectors
  Thu. Jul 21st, 03:55 -- 04:00 PM, Room Hall F
More from the Same Authors
- 2021: Automated Discovery of Adaptive Attacks on Adversarial Defenses
  Chengyuan Yao · Pavol Bielik · Petar Tsankov · Martin Vechev
- 2023: Incentivizing Honesty among Competitors in Collaborative Learning
  Florian Dorner · Nikola Konstantinov · Georgi Pashaliev · Martin Vechev
- 2023: Programmable Synthetic Tabular Data Generation
  Mark Vero · Mislav Balunovic · Martin Vechev
- 2023: Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning
  Kostadin Garov · Dimitar I. Dimitrov · Nikola Jovanović · Martin Vechev
- 2023: Large Language Models are Zero-Shot Multi-Tool Users
  Luca Beurer-Kellner · Marc Fischer · Martin Vechev
- 2023: LMQL Chat: Scripted Chatbot Development
  Luca Beurer-Kellner · Marc Fischer · Martin Vechev
- 2023: Large Language Models for Code: Security Hardening and Adversarial Testing
  Jingxuan He · Martin Vechev
- 2023: Connecting Certified and Adversarial Training
  Yuhao Mao · Mark Müller · Marc Fischer · Martin Vechev
- 2023: Understanding Certified Training with Interval Bound Propagation
  Yuhao Mao · Mark Müller · Marc Fischer · Martin Vechev
- 2023 Workshop: 2nd Workshop on Formal Verification of Machine Learning
  Mark Müller · Brendon G. Anderson · Leslie Rice · Zhouxing Shi · Shubham Ugare · Huan Zhang · Martin Vechev · Zico Kolter · Somayeh Sojoudi · Cho-Jui Hsieh
- 2023 Poster: FARE: Provably Fair Representation Learning with Practical Certificates
  Nikola Jovanović · Mislav Balunovic · Dimitar I. Dimitrov · Martin Vechev
- 2023 Poster: TabLeak: Tabular Data Leakage in Federated Learning
  Mark Vero · Mislav Balunovic · Dimitar I. Dimitrov · Martin Vechev
- 2022 Workshop: Workshop on Formal Verification of Machine Learning
  Huan Zhang · Leslie Rice · Kaidi Xu · Aditi Raghunathan · Wan-Yi Lin · Cho-Jui Hsieh · Clark Barrett · Martin Vechev · Zico Kolter
- 2021 Poster: TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer
  Berkay Berabi · Jingxuan He · Veselin Raychev · Martin Vechev
- 2021 Poster: Scalable Certified Segmentation via Randomized Smoothing
  Marc Fischer · Maximilian Baader · Martin Vechev
- 2021 Spotlight: TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer
  Berkay Berabi · Jingxuan He · Veselin Raychev · Martin Vechev
- 2021 Spotlight: Scalable Certified Segmentation via Randomized Smoothing
  Marc Fischer · Maximilian Baader · Martin Vechev
- 2021 Poster: PODS: Policy Optimization via Differentiable Simulation
  Miguel Angel Zamora Mora · Momchil Peychev · Sehoon Ha · Martin Vechev · Stelian Coros
- 2021 Spotlight: PODS: Policy Optimization via Differentiable Simulation
  Miguel Angel Zamora Mora · Momchil Peychev · Sehoon Ha · Martin Vechev · Stelian Coros
- 2020 Poster: Adversarial Robustness for Code
  Pavol Bielik · Martin Vechev
- 2020 Poster: Adversarial Attacks on Probabilistic Autoregressive Forecasting Models
  Raphaël Dang-Nhu · Gagandeep Singh · Pavol Bielik · Martin Vechev
- 2019 Poster: DL2: Training and Querying Neural Networks with Logic
  Marc Fischer · Mislav Balunovic · Dana Drachsler-Cohen · Timon Gehr · Ce Zhang · Martin Vechev
- 2019 Oral: DL2: Training and Querying Neural Networks with Logic
  Marc Fischer · Mislav Balunovic · Dana Drachsler-Cohen · Timon Gehr · Ce Zhang · Martin Vechev
- 2018 Poster: Training Neural Machines with Trace-Based Supervision
  Matthew Mirman · Dimitar Dimitrov · Pavle Djordjevic · Timon Gehr · Martin Vechev
- 2018 Oral: Training Neural Machines with Trace-Based Supervision
  Matthew Mirman · Dimitar Dimitrov · Pavle Djordjevic · Timon Gehr · Martin Vechev
- 2018 Poster: Differentiable Abstract Interpretation for Provably Robust Neural Networks
  Matthew Mirman · Timon Gehr · Martin Vechev
- 2018 Oral: Differentiable Abstract Interpretation for Provably Robust Neural Networks
  Matthew Mirman · Timon Gehr · Martin Vechev