
Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training
Xuxi Chen · Wuyang Chen · Tianlong Chen · Ye Yuan · Chen Gong · Kewei Chen · Zhangyang “Atlas” Wang

Thu Jul 16 06:00 PM -- 06:45 PM & Fri Jul 17 04:00 AM -- 04:45 AM (PDT) @

Many real-world applications have to tackle the Positive-Unlabeled (PU) learning problem, i.e., learning binary classifiers from a large amount of unlabeled data and a few labeled positive examples. While current state-of-the-art methods employ importance reweighting to design various biased or unbiased risk estimators, they completely ignore the learning capability of the model itself, which could provide reliable supervision. This motivates us to propose a novel Self-PU learning framework, which seamlessly integrates PU learning and self-training. Self-PU highlights three "self"-oriented building blocks: a self-paced training algorithm that adaptively discovers and augments confident positive/negative examples as training proceeds; a self-reweighted, instance-aware loss; and a self-distillation scheme that introduces teacher-student learning as an effective regularization for PU learning. We demonstrate the state-of-the-art performance of Self-PU on common PU learning benchmarks (MNIST and CIFAR10), where it compares favorably against the latest competitors. Moreover, we study a real-world application of PU learning, i.e., classifying brain images of Alzheimer's Disease. Self-PU obtains significantly improved results on the renowned Alzheimer's Disease Neuroimaging Initiative (ADNI) database over existing methods.
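To make the self-paced component concrete, the sketch below shows one simple way such a selection step could look: rank unlabeled examples by the model's predicted positive score and take the most confident extremes as pseudo-positives/negatives, with a growing "pace" budget over training. This is a hypothetical, minimal illustration (the function name `select_confident` and the ranking rule are assumptions, not the authors' exact algorithm):

```python
import numpy as np

def select_confident(scores, pace):
    """Pick the `pace` most confident pseudo-positives and pseudo-negatives
    from unlabeled examples, given model scores (higher = more positive).

    Hypothetical sketch of a self-paced selection step: the `pace` budget
    would typically increase as training proceeds, so the trusted set of
    pseudo-labeled examples grows over time.
    """
    order = np.argsort(scores)
    neg_idx = order[:pace]    # lowest scores -> confident pseudo-negatives
    pos_idx = order[-pace:]   # highest scores -> confident pseudo-positives
    return pos_idx, neg_idx

# Toy example: model scores for 8 unlabeled examples.
scores = np.array([0.05, 0.9, 0.2, 0.8, 0.5, 0.95, 0.1, 0.6])
pos, neg = select_confident(scores, pace=2)
# pos holds the indices of the two highest-scoring examples,
# neg the two lowest-scoring ones.
```

In the full framework these selected examples would receive supervised (pseudo-label) losses while the remaining unlabeled data stay under the PU risk estimator; the sketch only illustrates the confidence-ranking idea.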

Author Information

Xuxi Chen (University of Science and Technology of China)
Wuyang Chen (Texas A&M University)
Tianlong Chen (Texas A&M University)
Ye Yuan (Texas A&M University)
Chen Gong (Nanjing University of Science and Technology)
Kewei Chen (Green Valley Pharmaceutical LLC)
Zhangyang “Atlas” Wang (University of Texas at Austin)