

Oral in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Toward Testing Deep Learning Library via Model Fuzzing

Keywords: [ Deep Learning Library ] [ Model Fuzzing ]


Abstract:

The irreversible trend of empowering industry with deep learning (DL) capabilities raises new security challenges. A DL-based system is vulnerable to serious attacks if vulnerabilities in the underlying DL frameworks (e.g., TensorFlow, PyTorch) are exploited. Testing DL frameworks is therefore crucial to bridge the gap between security requirements and deployment urgency. My research project addresses this challenge with a specifically designed model fuzzing method. First, we generate diverse models using optimized mutation strategies to test library implementations in both the training and prediction phases. Furthermore, when selecting model seeds, we prioritize them by a seed performance score that combines coverage, discovery time, and mutation count. Our algorithm also selects the optimal mutation strategy heuristically to amplify inconsistencies. Finally, to evaluate the effectiveness of our scheme, we implement our test framework and conduct experiments on PyTorch, TensorFlow, and Theano. The preliminary results demonstrate that this is a promising direction worth further research.
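To make the workflow concrete, the sketch below illustrates the three ingredients the abstract names: a model mutation operator, a seed priority score over coverage, discovery time, and mutation count, and an inconsistency check between two framework outputs. The weighting constants, the specific mutation operators, and the toy model representation are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def seed_priority(coverage, discovery_time_s, mutation_count,
                  w_cov=1.0, w_time=0.1, w_mut=0.05):
    """Score a model seed for the fuzzing queue (assumed weighting):
    higher coverage is better; seeds found quickly and with few
    mutations are cheaper to explore further."""
    return (w_cov * coverage
            - w_time * math.log1p(discovery_time_s)
            - w_mut * mutation_count)

def mutate_model(layers, rng):
    """One illustrative mutation step on a toy model, represented
    as a list of layer widths: duplicate, drop, or widen a layer."""
    layers = list(layers)
    op = rng.choice(["duplicate", "drop", "widen"])
    i = rng.randrange(len(layers))
    if op == "duplicate":
        layers.insert(i, layers[i])
    elif op == "drop" and len(layers) > 1:
        layers.pop(i)
    else:
        layers[i] *= 2
    return layers

def max_abs_diff(out_a, out_b):
    """Differential-testing metric: largest elementwise deviation
    between two frameworks' outputs for the same model and input."""
    return max(abs(a - b) for a, b in zip(out_a, out_b))

if __name__ == "__main__":
    rng = random.Random(0)
    seed = [16, 32, 64]                      # layer widths of a toy seed model
    mutant = mutate_model(seed, rng)
    score = seed_priority(coverage=0.42, discovery_time_s=3.0, mutation_count=2)
    # Stand-in outputs for two libraries running the same mutant:
    diff = max_abs_diff([0.10, 0.20], [0.10, 0.21])
    print(mutant, round(score, 3), diff > 1e-3)
```

In a real harness, `max_abs_diff` would compare tensors produced by the actual libraries under test, and mutants whose diff exceeds a tolerance would be reported as candidate inconsistencies.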
