Poster
in
Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Toward Testing Deep Learning Library via Model Fuzzing

Wei Kong · huayang cao · Tong Wang · Yuanping Nie · hu li · Xiaohui Kuang

Keywords: [ Model Fuzzing ] [ Deep Learning Library ]


Abstract:

The increasing adoption of deep learning (DL) technologies in safety-critical industries has brought a corresponding rise in security challenges. Yet the security of DL frameworks (TensorFlow, PyTorch, PaddlePaddle), which serve as the foundation of various DL models, has not received the attention it deserves. Vulnerabilities in DL frameworks can pose significant security risks, such as unreliable models and data leakage. In this research project, we address this challenge with a specifically designed model fuzzing method. First, we generate diverse models through optimized mutation strategies to test library implementations in both the training and prediction phases. Furthermore, we prioritize the selection of model seeds using a seed performance score that combines coverage, discovery time, and mutation count. Our algorithm also heuristically selects the optimal mutation strategy to amplify inconsistencies. Finally, to evaluate the effectiveness of our scheme, we implement our test framework and conduct experiments on existing DL frameworks. The preliminary results demonstrate that this is a promising direction.
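The seed-prioritization step described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the `ModelSeed` fields, the linear scoring weights, and the sign conventions (reward coverage, penalize slow discovery and long mutation chains) are all assumptions made for the example.

```python
class ModelSeed:
    """A mutated model candidate tracked by the fuzzer (hypothetical structure)."""

    def __init__(self, model_id, coverage, discovery_time, mutation_count):
        self.model_id = model_id
        self.coverage = coverage              # fraction of library code paths exercised
        self.discovery_time = discovery_time  # seconds until this seed exposed new behavior
        self.mutation_count = mutation_count  # number of mutations applied to reach this seed

    def score(self, w_cov=1.0, w_time=0.5, w_mut=0.2):
        # Illustrative weighting: higher coverage raises the score, while
        # long discovery times and heavy mutation chains lower it.
        return (w_cov * self.coverage
                - w_time * self.discovery_time
                - w_mut * self.mutation_count)


def prioritize(seeds):
    """Return seeds ordered best-first by their performance score."""
    return sorted(seeds, key=lambda s: s.score(), reverse=True)
```

In a full fuzzing loop, the top-ranked seed would be popped from this ordering, mutated with the currently favored strategy, and run through each DL framework under test so that output inconsistencies can be compared.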