Detecting out-of-distribution examples is important for safety-critical machine learning applications such as detecting novel biological phenomena and operating self-driving cars. However, existing research mainly focuses on simple small-scale settings. To set the stage for more realistic out-of-distribution detection, we depart from small-scale settings and explore large-scale multi-class and multi-label settings with high-resolution images and thousands of classes. To make future work in real-world settings possible, we create new benchmarks for three large-scale settings. First, to test ImageNet multi-class anomaly detectors, we introduce the Species dataset, containing over 700,000 images and over a thousand anomalous species. Second, we leverage ImageNet-21K to evaluate PASCAL VOC and COCO multi-label anomaly detectors. Third, we introduce a new anomaly segmentation benchmark with road anomalies. We conduct extensive experiments in these more realistic settings and find that a surprisingly simple detector based on the maximum logit outperforms prior methods in all the large-scale multi-class, multi-label, and segmentation tasks, establishing a simple new baseline for future work.
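As a point of reference, the maximum-logit detector reduces to a one-line scoring rule. Below is a minimal PyTorch sketch (function names are ours, not from the paper's code) contrasting it with the common maximum softmax probability (MSP) baseline; it assumes a trained classifier whose forward pass returns a logits tensor of shape (batch, num_classes). Anomalies are then flagged by thresholding the score, with the threshold chosen on held-out in-distribution data.

```python
import torch

def maxlogit_score(logits: torch.Tensor) -> torch.Tensor:
    # MaxLogit: score each input by its negative maximum
    # unnormalized logit; higher scores indicate inputs that
    # are more likely out-of-distribution.
    return -logits.max(dim=-1).values

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    # MSP baseline: negative maximum softmax probability.
    # Softmax normalization discards the logit magnitudes that
    # MaxLogit exploits, which matters when there are many classes.
    return -torch.softmax(logits, dim=-1).max(dim=-1).values
```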
Author Information
Dan Hendrycks (UC Berkeley)
Steven Basart (University of Chicago)
Mantas Mazeika (UIUC)
Andy Zou (UC Berkeley)
Joseph Kwon (Yale University)
Mohammadreza Mostajabi (Zendar)
Jacob Steinhardt (UC Berkeley)
Dawn Song (UC Berkeley)
Dawn Song is a Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. Her research interests lie in deep learning, security, and blockchain. She has studied diverse security and privacy issues in computer systems and networks, ranging from software security, network security, distributed systems security, applied cryptography, and blockchain and smart contracts to the intersection of machine learning and security. She is the recipient of various awards, including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the George Tallman Ladd Research Award, the Okawa Foundation Research Award, the Li Ka Shing Foundation Women in Science Distinguished Lecture Series Award, Faculty Research Awards from IBM, Google, and other major tech companies, and Best Paper Awards from top conferences in computer security and deep learning. She obtained her Ph.D. from UC Berkeley. Prior to joining UC Berkeley, she was on the faculty at Carnegie Mellon University from 2002 to 2007.
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Scaling Out-of-Distribution Detection for Real-World Settings
  Wed. Jul 20th 09:45 -- 09:50 PM, Room: Ballroom 1 & 2
More from the Same Authors
- 2023 Poster: Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark
  Alexander Pan · Jun Shern Chan · Andy Zou · Nathaniel Li · Steven Basart · Thomas Woodside · Hanlin Zhang · Scott Emmons · Dan Hendrycks
- 2023 Oral: Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark
  Alexander Pan · Jun Shern Chan · Andy Zou · Nathaniel Li · Steven Basart · Thomas Woodside · Hanlin Zhang · Scott Emmons · Dan Hendrycks
- 2022: Distribution Shift Through the Lens of Explanations
  Jacob Steinhardt
- 2022 Poster: How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection
  Mantas Mazeika · Bo Li · David Forsyth
- 2022 Poster: Predicting Out-of-Distribution Error with the Projection Norm
  Yaodong Yu · Zitong Yang · Alexander Wei · Yi Ma · Jacob Steinhardt
- 2022 Poster: More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize
  Alexander Wei · Wei Hu · Jacob Steinhardt
- 2022 Spotlight: How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection
  Mantas Mazeika · Bo Li · David Forsyth
- 2022 Spotlight: Predicting Out-of-Distribution Error with the Projection Norm
  Yaodong Yu · Zitong Yang · Alexander Wei · Yi Ma · Jacob Steinhardt
- 2022 Spotlight: More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize
  Alexander Wei · Wei Hu · Jacob Steinhardt
- 2022 Poster: Describing Differences between Text Distributions with Natural Language
  Ruiqi Zhong · Charlie Snell · Dan Klein · Jacob Steinhardt
- 2022 Spotlight: Describing Differences between Text Distributions with Natural Language
  Ruiqi Zhong · Charlie Snell · Dan Klein · Jacob Steinhardt
- 2021 Workshop: Workshop on Socially Responsible Machine Learning
  Chaowei Xiao · Animashree Anandkumar · Mingyan Liu · Dawn Song · Raquel Urtasun · Jieyu Zhao · Xueru Zhang · Cihang Xie · Xinyun Chen · Bo Li
- 2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
  Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian
- 2021 Invited Talk: Towards building a responsible data economy
  Dawn Song
- 2021 Workshop: Uncertainty and Robustness in Deep Learning
  Balaji Lakshminarayanan · Dan Hendrycks · Sharon Li · Jasper Snoek · Silvia Chiappa · Sebastian Nowozin · Thomas Dietterich
- 2020 Workshop: Uncertainty and Robustness in Deep Learning Workshop (UDL)
  Sharon Yixuan Li · Balaji Lakshminarayanan · Dan Hendrycks · Thomas Dietterich · Jasper Snoek
- 2020 Poster: Rethinking Bias-Variance Trade-off for Generalization of Neural Networks
  Zitong Yang · Yaodong Yu · Chong You · Jacob Steinhardt · Yi Ma
- 2019 Workshop: Understanding and Improving Generalization in Deep Learning
  Dilip Krishnan · Hossein Mobahi · Behnam Neyshabur · Peter Bartlett · Dawn Song · Nati Srebro
- 2019 Workshop: Uncertainty and Robustness in Deep Learning
  Sharon Yixuan Li · Dan Hendrycks · Thomas Dietterich · Balaji Lakshminarayanan · Justin Gilmer
- 2019 Poster: Using Pre-Training Can Improve Model Robustness and Uncertainty
  Dan Hendrycks · Kimin Lee · Mantas Mazeika
- 2019 Oral: Using Pre-Training Can Improve Model Robustness and Uncertainty
  Dan Hendrycks · Kimin Lee · Mantas Mazeika
- 2018 Invited Talk: AI and Security: Lessons, Challenges and Future Directions
  Dawn Song