This work tackles a central machine learning problem: performance degradation on out-of-distribution (OOD) test sets. The problem is particularly salient in medical imaging, where a diagnosis system may appear accurate yet fail when tested on data from new hospitals or datasets. Recent studies indicate that such systems may learn shortcut and non-relevant features instead of generalizable, so-called `good features'. We hypothesize that adversarial training can eliminate shortcut features, whereas saliency-guided training can filter out non-relevant features; both are nuisance features that account for the performance degradation on OOD test sets. Building on this hypothesis, we formulate a novel training scheme for deep neural networks to learn good features for classification and/or detection tasks, ensuring consistent generalization performance on OOD test sets. Experimental results on benchmark chest X-ray (CXR) image datasets qualitatively and quantitatively demonstrate the superior performance of our method on classification tasks.
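To make the two training signals concrete, below is a minimal PyTorch sketch of one possible training step that combines adversarial training with saliency-guided training. It is not the authors' released implementation: a single-step FGSM perturbation stands in for the adversarial component, and a gradient-based saliency mask with a KL-consistency term stands in for the saliency-guided component. The names `model`, `epsilon`, `mask_ratio`, and `lambda_sal` are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method) of one combined training step.
import torch
import torch.nn.functional as F

def train_step(model, x, y, optimizer, epsilon=0.01, mask_ratio=0.5, lambda_sal=1.0):
    model.train()

    # 1) Adversarial example via a single FGSM step (targets shortcut features).
    #    Clipping to the valid pixel range is omitted for brevity.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_adv = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss_adv, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # 2) Input saliency: gradient of the clean-input loss w.r.t. the input.
    x_sal = x.clone().detach().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x_sal), y)
    saliency = torch.autograd.grad(loss_clean, x_sal)[0].abs()

    # 3) Mask the lowest-saliency pixels (non-relevant features) per example.
    flat = saliency.flatten(1)
    k = max(1, int(mask_ratio * flat.size(1)))
    thresh = flat.kthvalue(k, dim=1).values.view(-1, 1, 1, 1)
    x_masked = torch.where(saliency <= thresh, torch.zeros_like(x), x)

    # 4) Combined objective: adversarial cross-entropy plus a KL-consistency
    #    term between predictions on the original and saliency-masked inputs.
    logits_clean = model(x)
    logits_adv = model(x_adv)
    logits_masked = model(x_masked)
    loss = F.cross_entropy(logits_adv, y) + lambda_sal * F.kl_div(
        F.log_softmax(logits_masked, dim=1),
        F.softmax(logits_clean, dim=1),
        reduction="batchmean",
    )

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The FGSM step pushes the model to remain correct under perturbations that exploit shortcut features, while the KL term encourages predictions to depend only on the high-saliency regions; stronger multi-step attacks (e.g., PGD) or different masking strategies could be substituted without changing the overall structure.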
Author Information
Xin Li (Bosch AI)
Yao Qiang (Wayne State University)
Chengyin Li (Wayne State University)
Sijia Liu (Michigan State University)
Dongxiao Zhu (Wayne State University)
Dongxiao Zhu is currently an Associate Professor in the Department of Computer Science at Wayne State University. He received his B.S. from Shandong University (1996), his M.S. from Peking University (1999), and his Ph.D. from the University of Michigan (2006). His recent research interests are in machine learning and its applications in health informatics, natural language processing, medical imaging, and other data science domains. Dr. Zhu is the Director of the Machine Learning and Predictive Analytics (MLPA) Lab and the Director of the Computer Science Graduate Program at Wayne State University. He has published over 70 peer-reviewed publications and numerous book chapters, and he has served on several editorial boards of scientific journals. Dr. Zhu's research has been supported by NIH, NSF, and private agencies, and he has served on multiple NIH and NSF grant review panels. He has advised numerous students at the undergraduate, graduate, and postdoctoral levels, and his teaching interests lie in programming languages, data structures and algorithms, machine learning, and data science.
More from the Same Authors
- 2023: Proximal Compositional Optimization for Distributionally Robust Learning
  Prashant Khanduri · Chengyin Li · Rafi Ibn Sultan · Yao Qiang · Joerg Kliewer · Dongxiao Zhu
- 2023 Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Baharan Mirzasoleiman · Sanmi Koyejo
- 2023 Oral: Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks
  Mohammed Nowaz Rabbani Chowdhury · Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen
- 2023 Poster: Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach
  Prashant Khanduri · Ioannis Tsaknakis · Yihua Zhang · Jia Liu · Sijia Liu · Jiawei Zhang · Mingyi Hong
- 2023 Poster: Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks
  Mohammed Nowaz Rabbani Chowdhury · Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen
- 2022 Workshop: New Frontiers in Adversarial Machine Learning
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo
- 2022 Poster: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training
  Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang
- 2022 Poster: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
  Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang
- 2022 Spotlight: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training
  Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang
- 2022 Spotlight: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
  Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang
- 2022 Poster: Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling
  Hongkang Li · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong
- 2022 Poster: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
  Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu
- 2022 Spotlight: Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling
  Hongkang Li · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong
- 2022 Spotlight: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization
  Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu
- 2022 Poster: Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework
  Ching-Yun (Irene) Ko · Jeet Mohapatra · Sijia Liu · Pin-Yu Chen · Luca Daniel · Lily Weng
- 2022 Spotlight: Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework
  Ching-Yun (Irene) Ko · Jeet Mohapatra · Sijia Liu · Pin-Yu Chen · Luca Daniel · Lily Weng
- 2021 Poster: Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?
  Ning Liu · Geng Yuan · Zhengping Che · Xuan Shen · Xiaolong Ma · Qing Jin · Jian Ren · Jian Tang · Sijia Liu · Yanzhi Wang
- 2021 Spotlight: Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?
  Ning Liu · Geng Yuan · Zhengping Che · Xuan Shen · Xiaolong Ma · Qing Jin · Jian Ren · Jian Tang · Sijia Liu · Yanzhi Wang