

Oral in Workshop: Shift happens: Crowdsourcing metrics and test datasets beyond ImageNet

OOD-CV: A Benchmark for Robustness to Individual Nuisances in Real-World Out-of-Distribution Shifts

Bingchen Zhao · Shaozuo Yu · Wufei Ma · Mingxin Yu · Shenxiao Mei · Angtian Wang · Ju He · Alan Yuille · Adam Kortylewski


Abstract:

Enhancing the robustness of vision algorithms in real-world scenarios is challenging. One reason is that existing robustness benchmarks are limited: they either rely on synthetic data or ignore the effects of individual nuisance factors. We introduce ROBIN, a benchmark dataset that includes out-of-distribution examples of 10 object categories in terms of pose, shape, texture, context, and weather conditions, and enables benchmarking models for image classification, object detection, and 3D pose estimation. Our experiments using popular baseline methods reveal that: 1) Some nuisance factors have a much stronger negative effect on performance than others, with the effect also depending on the vision task. 2) Current approaches to enhancing robustness have only marginal effects, and can even reduce robustness. 3) We do not observe significant differences between convolutional and transformer architectures. We believe our dataset provides a rich testbed to study robustness and will help push forward research in this area.
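The core of the evaluation protocol described above is measuring performance separately for each nuisance factor, rather than reporting a single aggregate OOD score. The sketch below illustrates this idea for classification; it is a minimal, hypothetical example, not the official ROBIN/OOD-CV API, and the nuisance tag names are assumptions for illustration.

```python
# Hypothetical sketch: per-nuisance accuracy on an OOD test set.
# Assumes each test image carries a single nuisance tag; this is an
# illustration of the evaluation idea, not the benchmark's actual loader.
from collections import defaultdict

def per_nuisance_accuracy(predictions, labels, nuisances):
    """Compute classification accuracy separately for each nuisance factor.

    predictions, labels: sequences of class ids for each test image.
    nuisances: sequence of nuisance tags (e.g. "pose", "shape", "texture",
               "context", "weather") aligned with the test images.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, nuisance in zip(predictions, labels, nuisances):
        total[nuisance] += 1
        if pred == label:
            correct[nuisance] += 1
    return {n: correct[n] / total[n] for n in total}

# Toy example: comparing these per-nuisance scores against an
# in-distribution baseline reveals which individual factor hurts most.
preds = [0, 1, 1, 2, 0, 2]
labels = [0, 1, 2, 2, 1, 2]
tags = ["pose", "pose", "texture", "texture", "weather", "weather"]
print(per_nuisance_accuracy(preds, labels, tags))
```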
