

Talk

Self-Paced Co-training

Fan Ma · Deyu Meng · Qi Xie · Zina Li · Xuanyi Dong

C4.6 & C4.7

Abstract:

Co-training is a well-known semi-supervised learning approach that trains classifiers on two different views and iteratively exchanges labels of unlabeled instances. During the co-training process, the labels assigned to unlabeled instances in the training pool are very likely to be false, especially in the initial training rounds, yet the standard co-training algorithm follows a "draw without replacement" manner and never removes these falsely labeled instances from training. This issue not only tends to degenerate its performance but also hampers its fundamental theory. Besides, there is no optimization model that explains what objective a co-training process optimizes. To address these issues, in this study we design a new co-training algorithm named self-paced co-training (SPaCo) with a "draw with replacement" learning mode. The rationality of SPaCo can be proved under the theoretical assumptions used in traditional co-training research, and, furthermore, the algorithm exactly complies with the alternating optimization process of an optimization model for self-paced curriculum learning, which can be finely explained in a robust learning manner. Experimental results substantiate the superiority of the proposed method compared with current state-of-the-art co-training methods.
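The abstract does not spell out the optimization model it refers to. As a rough sketch only, SPaCo-style formulations build on the classical single-view self-paced learning objective (Kumar et al., 2010), shown below with per-instance losses L and a pace parameter lambda; the actual SPaCo model additionally couples two views through co-regularization and label-exchange terms that are omitted here.

% Classical single-view self-paced learning objective (a sketch, not the
% exact SPaCo model): v_i in [0,1] selects instance i, lambda sets the "pace".
\min_{\mathbf{w},\,\mathbf{v}\in[0,1]^{n}}\;
  \sum_{i=1}^{n} v_i\, L\bigl(y_i, f(x_i;\mathbf{w})\bigr)
  \;-\; \lambda \sum_{i=1}^{n} v_i
% With w fixed, the minimizer is v_i = 1 if L_i < lambda and v_i = 0 otherwise;
% with v fixed, w is retrained on the currently selected instances.

Because the selection variables v are re-solved in every round, an instance whose pseudo-label turns out to be unreliable (high loss) can be dropped from the pool again, which is the "draw with replacement" behavior contrasted above with standard co-training; lambda is gradually increased so that harder instances enter training as the classifiers improve.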
