Workshop
Shift happens: Crowdsourcing metrics and test datasets beyond ImageNet
Roland S. Zimmermann · Julian Bitterwolf · Evgenia Rusak · Steffen Schneider · Matthias Bethge · Wieland Brendel · Matthias Hein

Fri Jul 22 06:00 AM -- 04:15 PM (PDT) @ Ballroom 4
Event URL: https://shift-happens-benchmark.github.io/

Deep vision models are prone to shortcut learning and are vulnerable to adversarial attacks as well as natural and synthetic image corruptions. While out-of-distribution (OOD) test sets have been proposed to measure the vulnerability of DNNs to distribution shifts of various kinds, performance on popular OOD test sets such as ImageNet-C or ObjectNet has been shown to correlate strongly with performance on clean ImageNet. Since performance on clean ImageNet tests IID but not OOD generalization, this calls for new, challenging OOD datasets that probe different aspects of generalization.

Our goal is to bring the robustness, domain adaptation, and out-of-distribution detection communities together to work on a new broad-scale benchmark that tests diverse aspects of current computer vision models and guides the way towards the next generation of models. Submissions to this workshop will contain novel datasets, metrics, and evaluation settings.
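To make the correlation argument concrete, the sketch below (purely illustrative, not part of the workshop materials; all model accuracies are made-up placeholders) computes the Pearson correlation between clean ImageNet accuracy and OOD accuracy across a pool of models. A correlation near 1 would mean the OOD benchmark adds little information beyond clean performance.

# Illustrative sketch: quantify how much an OOD benchmark's ranking adds
# beyond clean ImageNet accuracy. All numbers are made-up placeholders.
import numpy as np

# Hypothetical top-1 accuracies (%) for five models.
clean_imagenet = np.array([71.5, 76.1, 79.0, 81.2, 84.5])
imagenet_c     = np.array([39.2, 45.3, 51.0, 55.7, 62.1])  # e.g., mean over corruptions

r = np.corrcoef(clean_imagenet, imagenet_c)[0, 1]
print(f"Pearson r = {r:.2f}")
# An r close to 1 indicates the OOD test set mostly re-ranks models by their
# clean accuracy, motivating new, less correlated OOD datasets.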

Author Information

Roland S. Zimmermann (University of Tübingen, International Max Planck Research School for Intelligent Systems)
Julian Bitterwolf (University of Tübingen)
Evgenia Rusak (University of Tübingen)
Steffen Schneider (University of Tübingen / EPFL / ELLIS)
Matthias Bethge (University of Tübingen)
Wieland Brendel (University of Tübingen)
Matthias Hein (University of Tübingen)