Spotlight
Model Agnostic Sample Reweighting for Out-of-Distribution Learning
Xiao Zhou · Yong LIN · Renjie Pi · Weizhong Zhang · Renzhe Xu · Peng Cui · Tong Zhang

Tue Jul 19 08:35 AM -- 08:40 AM (PDT) @ Room 318 - 320

Distributionally robust optimization (DRO) and invariant risk minimization (IRM) are two popular methods proposed to improve the out-of-distribution (OOD) generalization performance of machine learning models. While effective for small models, these methods have been observed to be vulnerable to overfitting with large overparameterized models. This work proposes a principled method, Model Agnostic samPLe rEweighting (MAPLE), to effectively address the OOD problem, especially in overparameterized scenarios. Our key idea is to find an effective reweighting of the training samples so that standard empirical risk minimization training of a large model on the weighted training data leads to superior OOD generalization performance. The overfitting issue is addressed by a bilevel formulation that searches for the sample reweighting, in which the generalization complexity depends on the search space of sample weights instead of the model size. We present a theoretical analysis in the linear case to prove the insensitivity of MAPLE to model size, and empirically verify that it surpasses state-of-the-art methods by a large margin.
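The bilevel idea described in the abstract can be illustrated with a minimal sketch (this is not the authors' released MAPLE implementation): the inner level performs weighted empirical risk minimization on the training data, while the outer level adjusts per-sample weights so that the resulting model performs well on a held-out split used here as a proxy for the OOD objective. The sketch below assumes PyTorch, a linear model, synthetic data, and a single unrolled inner gradient step; all variable names and hyperparameters are illustrative.

    import torch

    torch.manual_seed(0)

    # Synthetic data: a training split and a held-out split standing in for
    # the OOD validation signal driving the outer objective.
    n_train, n_val, d = 200, 50, 10
    X_tr, y_tr = torch.randn(n_train, d), torch.randn(n_train, 1)
    X_val, y_val = torch.randn(n_val, d), torch.randn(n_val, 1)

    # Outer variables: per-sample weight logits. Inner variables: model
    # parameters (a linear model for simplicity).
    weight_logits = torch.zeros(n_train, requires_grad=True)
    theta = torch.zeros(d, 1, requires_grad=True)

    inner_lr, outer_lr = 0.1, 0.05
    outer_opt = torch.optim.Adam([weight_logits], lr=outer_lr)

    def weighted_mse(params, X, y, w):
        residual = (X @ params - y).squeeze(-1)
        return (w * residual ** 2).sum() / w.sum()

    for outer_step in range(100):
        # Per-sample weights, kept positive and normalized.
        w = torch.softmax(weight_logits, dim=0)

        # Inner step: one unrolled gradient update of weighted ERM on the
        # training split, keeping the graph so the outer step can
        # differentiate through it.
        train_loss = weighted_mse(theta, X_tr, y_tr, w)
        grad_theta = torch.autograd.grad(train_loss, theta, create_graph=True)[0]
        theta_new = theta - inner_lr * grad_theta

        # Outer step: the loss of the updated model on the held-out split acts
        # as a proxy for OOD performance; its gradient flows back into the
        # sample weights through the unrolled inner update.
        val_loss = ((X_val @ theta_new - y_val) ** 2).mean()
        outer_opt.zero_grad()
        val_loss.backward()
        outer_opt.step()

        # Commit the inner update (values only, outside the autograd graph).
        with torch.no_grad():
            theta.copy_(theta_new.detach())

    # Per the abstract's recipe, the learned weights would then be used to
    # retrain the (possibly much larger) model from scratch with standard
    # weighted ERM on the reweighted training data.
    final_weights = torch.softmax(weight_logits, dim=0).detach()

Because the outer gradient acts only on the per-sample weights, the size of the outer search space scales with the number of training samples rather than with the model size, which is the property the abstract highlights for overparameterized models.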

Author Information

Xiao Zhou (HKUST)
Yong LIN (The Hong Kong University of Science and Technology)
Renjie Pi (Hong Kong University of Science and Technology)
Weizhong Zhang (The Hong Kong University of Science and Technology)
Renzhe Xu (Tsinghua University)
Peng Cui (Tsinghua University)

Peng Cui is an Associate Professor at Tsinghua University. He received his PhD from Tsinghua University in 2010. His research interests include causal inference and stable learning, network representation learning, and human behavioral modeling. He has published more than 100 papers in prestigious conferences and journals in data mining and multimedia. His recent research won the IEEE Multimedia Best Department Paper Award, SIGKDD 2016 Best Paper Finalist, ICDM 2015 Best Student Paper Award, SIGKDD 2014 Best Paper Finalist, IEEE ICME 2014 Best Paper Award, ACM MM12 Grand Challenge Multimodal Award, and MMM13 Best Paper Award. He is an Associate Editor of IEEE TKDE, IEEE TBD, ACM TIST, and ACM TOMM, and has served as program co-chair and area chair of several major machine learning and artificial intelligence conferences, including IJCAI, AAAI, ACM CIKM, and ACM Multimedia.

Tong Zhang (HKUST)

Tong Zhang is a professor of Computer Science and Mathematics at the Hong Kong University of Science and Technology. His research interests are machine learning, big data, and their applications. He obtained a BA in Mathematics and Computer Science from Cornell University and a PhD in Computer Science from Stanford University. Before joining HKUST, Tong Zhang was a professor at Rutgers University, and previously worked as a research scientist at IBM and Yahoo, as the director of Baidu's Big Data Lab, and as the founding director of Tencent's AI Lab. He is an ASA Fellow and an IMS Fellow, has served as chair or area chair of major machine learning conferences such as NIPS, ICML, and COLT, and has served as an associate editor of top machine learning journals such as PAMI, JMLR, and the Machine Learning journal.
