

Talk in Workshop: New Frontiers in Adversarial Machine Learning

A tale of adversarial attacks & out-of-distribution detection stories in the activation space

Celia Cintas


Abstract:

Most deep learning models assume ideal conditions and rely on the assumption that test/production data comes from the same distribution as the training data. However, this assumption is not satisfied in most real-world applications. Test data can differ from the training data due to adversarial perturbations, new classes, generated content, noise, or other distribution shifts. These shifts can cause unknown classes, i.e., classes that never appear during training, to be classified as known with high confidence. Likewise, adversarial perturbations in the input data can cause a sample to be incorrectly classified. In this talk, we will discuss approaches based on group and individual subset scanning methods from the anomalous pattern detection domain and how they can be applied to off-the-shelf DL models.
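To make the idea concrete, the sketch below illustrates one common flavor of nonparametric subset scanning over a model's activation space: each hidden node of a test input gets an empirical p-value against activations from clean data, and a Berk-Jones scan statistic searches for the subset of nodes whose p-values are jointly more extreme than expected. This is only a minimal illustration under assumed names and data (empirical_pvalues, subset_scan_score, synthetic activations), not the speaker's exact implementation.

```python
import numpy as np


def empirical_pvalues(background_acts, test_acts):
    """Empirical p-value per activation node: how extreme the test activation is
    relative to activations of clean (in-distribution) background data.
    background_acts: (n_background, n_nodes); test_acts: (n_nodes,)."""
    higher = (background_acts >= test_acts).sum(axis=0)
    return (higher + 1.0) / (background_acts.shape[0] + 1.0)


def berk_jones(n_alpha, n, alpha):
    """One-sided Berk-Jones scan statistic: KL divergence between the observed
    proportion of significant p-values (<= alpha) and the expected proportion alpha."""
    obs = n_alpha / n
    if obs <= alpha:
        return 0.0
    if obs >= 1.0:
        return n * np.log(1.0 / alpha)
    return n * (obs * np.log(obs / alpha) + (1 - obs) * np.log((1 - obs) / (1 - alpha)))


def subset_scan_score(pvalues, alphas=None):
    """Scan over significance thresholds alpha and subset sizes k.
    Only subsets formed by the k smallest p-values need to be scored for each
    alpha, so the search is linear in the number of nodes, not exponential."""
    pvalues = np.sort(np.asarray(pvalues))
    n = len(pvalues)
    if alphas is None:
        alphas = np.unique(np.clip(pvalues, 1e-4, 0.5))
    best = 0.0
    for alpha in alphas:
        n_alpha_total = np.searchsorted(pvalues, alpha, side="right")
        for k in range(1, n + 1):
            best = max(best, berk_jones(min(k, n_alpha_total), k, alpha))
    return best


# Usage with synthetic activations: higher scores suggest an anomalous
# (e.g. adversarial or out-of-distribution) sample.
rng = np.random.default_rng(0)
clean_acts = rng.normal(size=(500, 128))       # activations from clean data
test_acts = rng.normal(loc=1.5, size=128)      # a shifted (anomalous) test sample
score = subset_scan_score(empirical_pvalues(clean_acts, test_acts))
print(f"subset scan score: {score:.2f}")
```

In practice the background activations would come from a hidden layer of the off-the-shelf model evaluated on clean training data, and the score would be compared against scores of held-out clean samples to set a detection threshold.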

Short bio: Celia Cintas is a Research Scientist at IBM Research Africa - Nairobi. She is a member of the AI Science team at the Kenya Lab. Her current research focuses on improving ML techniques to address challenges in Global Health and on exploring subset scanning for anomalous pattern detection under generative models. Previously, she was a grantee of the National Scientific and Technical Research Council (CONICET), working on Deep Learning techniques for population studies at LCI-UNS and IPCSH-CONICET as part of the Consortium for Analysis of the Diversity and Evolution of Latin America (CANDELA). She holds a Ph.D. in Computer Science from Universidad del Sur (Argentina). https://celiacintas.github.io/about/
