Workshop: New Frontiers in Adversarial Machine Learning

Machine Learning Security: Lessons Learned and Future Challenges

Battista Biggio


In this talk, I will briefly review some recent advancements in the area of machine learning security, with a critical focus on the main factors that hinder progress in this field. These include the lack of an underlying, systematic, and scalable framework for properly evaluating machine-learning models under adversarial and out-of-distribution scenarios, along with suitable tools for easing their debugging. The latter may help unveil flaws in the evaluation process, as well as the presence of potential dataset biases and spurious features learned during training. Finally, I will report concrete examples of what our laboratory has recently been working on as a first step towards overcoming these limitations, in the context of Android and Windows malware detection.

Battista Biggio (MSc 2006, PhD 2010) is an Assistant Professor at the University of Cagliari, Italy, and co-founder of Pluribus One. His research interests include machine learning and cybersecurity. He has provided pioneering contributions in the area of ML security, demonstrating the first gradient-based evasion and poisoning attacks and how to mitigate them, and playing a leading role in the establishment and advancement of this research field. He has managed six research projects, and has served as a PC member for the most prestigious conferences and journals in the areas of ML and computer security (ICML, NeurIPS, ICLR, IEEE S&P, USENIX Security). He chaired the IAPR TC on Statistical Pattern Recognition Techniques (2016-2020), co-organized S+SSPR, AISec, and DLS, and served as Associate Editor for IEEE TNNLS, IEEE CIM, and Pattern Recognition. He is a senior member of the IEEE and ACM, and a member of the IAPR and ELLIS.