Oral in Workshop: Shift happens: Crowdsourcing metrics and test datasets beyond ImageNet
CCC: Continuously Changing Corruptions
Ori Press · Steffen Schneider · Matthias Kuemmerer · Matthias Bethge
Many existing datasets for robustness and adaptation evaluation are limited to static distribution shifts. We propose a well-calibrated dataset of continuously changing image corruptions at ImageNet scale. Our benchmark builds on the established common corruptions of ImageNet-C and extends them by applying two corruptions simultaneously with finer-grained severities, allowing for smooth transitions between corruption types. The benchmark contains random walks through different corruption types with controlled difficulties and speeds of domain shift. Our dataset can be used to benchmark test-time and domain adaptation algorithms in challenging settings that are closer to real-world applications than the static adaptation benchmarks typically used.
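To illustrate the idea of random walks over corruption severities, the following is a minimal sketch, not the benchmark's actual implementation: the severity grid, step size, and function names are assumptions. It shows how a random walk over a finer-grained severity scale, run independently for two concurrent corruptions, produces the smooth, continuously changing shifts the abstract describes.

```python
import random

# Assumed finer-grained severity grid (ImageNet-C uses integer severities 1..5;
# here we add half-steps down to 0 so a corruption can fade in and out smoothly).
SEVERITY_STEPS = [i * 0.5 for i in range(11)]  # 0.0, 0.5, ..., 5.0


def severity_walk(num_frames, step=1, seed=0):
    """Random walk over indices of the severity grid.

    `step` controls the speed of the domain shift: larger steps mean
    faster transitions between severity levels.
    """
    rng = random.Random(seed)
    idx = rng.randrange(len(SEVERITY_STEPS))
    walk = []
    for _ in range(num_frames):
        # Move up, down, or stay, clamped to the grid boundaries.
        idx = min(max(idx + rng.choice([-step, 0, step]), 0),
                  len(SEVERITY_STEPS) - 1)
        walk.append(SEVERITY_STEPS[idx])
    return walk


# Two corruptions active at the same time, each with its own walk:
# as one severity drifts toward zero, the other can take over,
# yielding a smooth transition between corruption types.
noise_walk = severity_walk(100, step=1, seed=1)
blur_walk = severity_walk(100, step=1, seed=2)
```

In this sketch, each image in the stream would be corrupted with both corruptions at their current walk severities, and the step size sets the speed of the resulting domain shift.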