There is widespread interest in developing robust classification models that can handle challenging object, scene, or image properties. While work in this area targets diverse kinds of robust behaviour, we argue in this work in favour of a requirement that should apply more generally: for robust behaviour to be scalable, it should transfer flexibly across familiar object classes, rather than be learned separately for every class of interest. To this end, we propose the systematic robustness setting, in which certain combinations of classes and attributes are systematically excluded during training. Unlike prior work, which studies systematic generalisation in DNNs or their susceptibility to spurious correlations, we use synthetic operations and data sampling to scale such experiments up to large-scale naturalistic datasets. This allows for a compromise between the ecological validity of the task and strict experimental controls. We analyse a variety of models and learning objectives, and find that robustness to different shifts such as image corruptions, image rotations, and abstract object depictions is perhaps harder to achieve than previous results would suggest. This extended abstract describes the general experimental setting, our specific instantiations, and a metric to measure systematic robustness.
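To make the setting concrete, below is a minimal, hypothetical Python sketch of the data-sampling step the abstract describes: selected (class, attribute) combinations are withheld from the training split but retained for evaluation. The sample data, the class and attribute names, and the `held_out` set are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' released code): build a training split
# in which chosen (class, attribute) combinations are systematically excluded,
# while a separate evaluation split retains exactly those combinations.

from dataclasses import dataclass


@dataclass
class Sample:
    image_path: str
    label: str       # object class, e.g. "dog"
    attribute: str   # e.g. an image corruption such as "fog", or "rotation_90"


def split_systematically(samples, held_out):
    """Exclude held-out (class, attribute) pairs from training; keep them for testing."""
    train = [s for s in samples if (s.label, s.attribute) not in held_out]
    test_systematic = [s for s in samples if (s.label, s.attribute) in held_out]
    return train, test_systematic


if __name__ == "__main__":
    data = [
        Sample("img0.jpg", "dog", "clean"),
        Sample("img1.jpg", "dog", "fog"),
        Sample("img2.jpg", "cat", "fog"),
        Sample("img3.jpg", "cat", "clean"),
    ]
    # "fog" is seen during training, but never together with the class "dog".
    held_out = {("dog", "fog")}
    train, test_sys = split_systematically(data, held_out)
    print(len(train), "training samples;", len(test_sys), "systematically held-out samples")
```

A model trained on such a split has seen both the held-out class and the held-out attribute, just never in combination; evaluating on the held-out combinations probes whether robust behaviour transfers across familiar classes.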
Author Information
Mohamed Omran (Max Planck Institute for Informatics)
Bernt Schiele (MPI for Informatics)
Related Events (a corresponding poster, oral, or spotlight)
- 2022: Towards Systematic Robustness for Scalable Visual Recognition
More from the Same Authors
- 2022: Are We Viewing the Problem of Robust Generalisation through the Appropriate Lens? (Mohamed Omran · Bernt Schiele)
- 2020 Poster: Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks (David Stutz · Matthias Hein · Bernt Schiele)