Poster in Workshop: Shift happens: Crowdsourcing metrics and test datasets beyond ImageNet

Towards Systematic Robustness for Scalable Visual Recognition

Mohamed Omran · Bernt Schiele


Abstract:

There is widespread interest in developing robust classification models that can handle challenging object, scene, or image properties. While work in this area targets diverse kinds of robust behaviour, we argue in favour of a requirement that should apply more generally: for robust behaviour to be scalable, it should transfer flexibly across familiar object classes, rather than being learned separately for every class of interest. To this end, we propose the systematic robustness setting, in which certain combinations of classes and attributes are systematically excluded during training. Unlike prior work that studies systematic generalisation in DNNs or their susceptibility to spurious correlations, we use synthetic operations and data sampling to scale such experiments up to large-scale naturalistic datasets. This allows for a compromise between the ecological validity of the task and strict experimental controls. We analyse a variety of models and learning objectives, and find that robustness to different shifts, such as image corruptions, image rotations, and abstract object depictions, is perhaps harder to achieve than previous results would suggest. This extended abstract describes the general experimental setting, our specific instantiations, and a metric for measuring systematic robustness.
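A minimal sketch of the data split this setting implies, assuming the data come as (image, class, attribute) triples; the names below (Sample, make_systematic_split, and the example pairs) are hypothetical illustrations, not the authors' implementation:

    # Hypothetical sketch of the "systematic robustness" split: certain
    # (class, attribute) combinations are withheld from training and
    # appear only at evaluation time.
    from dataclasses import dataclass
    from typing import List, Set, Tuple

    @dataclass
    class Sample:
        image_path: str
        label: str      # object class, e.g. "dog"
        attribute: str  # applied shift, e.g. "rotated" or "sketch"

    def make_systematic_split(
        samples: List[Sample],
        excluded_pairs: Set[Tuple[str, str]],
    ) -> Tuple[List[Sample], List[Sample]]:
        """Exclude the given (class, attribute) combinations from training;
        they form the evaluation set for systematic robustness."""
        train = [s for s in samples if (s.label, s.attribute) not in excluded_pairs]
        test = [s for s in samples if (s.label, s.attribute) in excluded_pairs]
        return train, test

    # Example: rotated dogs never appear in training, but rotation is still
    # seen on other classes; a systematically robust model should transfer it.
    excluded = {("dog", "rotated"), ("cat", "sketch")}

One natural way to use such a split, consistent with the metric the abstract mentions, is to compare a model's accuracy on the withheld combinations against its accuracy on combinations seen during training.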
